id (stringlengths 30-34) | text (stringlengths 0-75.5k) | industry_type (stringclasses: 1 value) |
---|---|---|
2015-48/0317/en_head.json.gz/6239 | ORANGE NETWORKING
138 Ridge Trl, Chapel Hill, NC 27516
Orange Networking's (ON) main goal is to foster equal access to the Internet so that all people may benefit from the use of digital communication tools. ON shall provide support to people who live or work in Orange County, North Carolina in the use of open, safe, and accessible computer networks. It shall deliver training so people can use and maintain open wireless networks. It will work to promote public networks and the creation of connections to the Internet via wireless and other future similar technologies. There are four areas of focus on which ON will concentrate: Advocacy, Wireless networks, Technology support, and Technology education.
ON will act as an advocate for the people of Orange County on complicated technology issues. We will stay in touch with their needs and look into the future for new technology resources that can help them. We will work with local county and town governments to serve people in the area of digital inclusion and equal access.
ON will promote the creation of community wireless networks that are intelligently designed and community-focused. We believe community wireless networks should exist to bring equal access to underserved people by sharing the power of the Internet, computer hardware, and software. The systems must be open, safe, and secure for all to use. Closed, restricted, or filtered networks do not serve everyone equally.
ON will provide technical support for community wireless networks. We'll also provide web services including online knowledge bases, tutorials, how-tos, and security services. The majority of the street level support will be done by community members. We shall conduct support training so adults and teens can provide technology support for their own communities. We'll support the supporters by acting as a community employer, mentor, and community digital tool shed--a place where hardware and software can be obtained to serve the community.
ON will conduct regular public training on construction, support, and maintenance of community wireless networks. The purpose is to increase the number of community technology experts. The more people who know tech the more support will be available.
Information wants to be free. Information openness will serve us all more successfully and less expensively. More informed community members serve us all.
www.orangenetworking.org
ORANGE NETWORKING does not have any active petitions. | 计算机 |
2015-48/0317/en_head.json.gz/7837 | GNOME 3 Released
Linked by Thom Holwerda on Wed 6th Apr 2011 17:50 UTC, submitted by Cytor
The day is finally here, the day that the GNOME team releases GNOME 3.0, the first major revision of the GNOME project since 2002. Little of GNOME 2.x is left in GNOME 3.0, and as such, you could call it GNOME's KDE4. We're living in fortunate times, what, with two wildly divergent open source desktops.
11 · Read More · 206 Comment(s) http://osne.ws/izl Thread beginning with comment 469570
RE[5]: Sigh... by tuma324 on Thu 7th Apr 2011 20:43 UTC in reply to "RE[4]: Sigh..."
"The GNOME 2.x interface is dated, GNOME 3 provides a new and innovative interface. But how is it 'dated'? I never said that it should remain static, as I implied in the prior post the primary focus should be on the backend with the UI keeping the same but with some refinements. The GNOME way of doing things work, sure the new preferences in GNOME 3.0 is really nice but it could have been achieved without the need of GNOME shell appearing. Personally I would have sooner seen the move to the 'global menu' idea that was floated at one point than seeing the GNOME shell but then again since I don't use GNOME my opinion doesn't really count for much at the end of the day. Because GNOME is not Mac OS X or Windows. In a few years you will probably realize that moving forward with the GUI is also important. Learn to adapt yourself to changes, it's not that hard, it will only do you good. I suggest you look through Macrumors at the wailing and gnashing of teeth when it comes to people complaining about some pretty trivial changes that have appeared in Mac OS X Lion. I'm not complaining about these changes, I think change is good if done for the right reasons but one has to realise that for the vast majority of people they have never learned the fundamental conceptual underpinnings of a UI thus any slight change to the UI throws them off. " Incremental development and changes is nice as you say, but there has to be a point where you have to say "Let's recreate this desktop and make it 1000 times better." or whatever they said. And then start from scratch, looking for the future, and write something more beautiful than before with strong foundations for the present and future, something that doesn't look just beautiful but something that is easy to maintain and something that performs better in every way. How is that a bad thing? It has to be done sooner or later, we can't just make incremental and little tweaks to a desktop that is showing its age when the competition (KDE4, Windows, Mac OS X) look much better in just every aspect of their GUI. I think the GNOME team did a great job with GNOME 3, and things like that is what we need on Linux, people who are brave enough to say "We can do better than this and we will do it.". and I believe that's good, competition and innovation is good, it will only benefit the users in the long run, and I hope Desktop Linux continues with that path, I hope running the major DEs on Wayland is the next thing. I understand that it might be annoying for some users to have to relearn the Desktop, but I don't think it'll be that hard, I mean, it's not that different and it's not that bad if you think about it, it just depends how you look at it. Edited 2011-04-07 21:02 UTC | 计算机 |
2015-48/0317/en_head.json.gz/7892 | PS2 Previews: Rayman 2: Revolution Preview
Rayman 2: Revolution Preview
Already making a huge splash on the DC, N64, and PC, and getting much momentum for the upcoming PS version, Rayman 2 seems to be the talk of the town these days. The Nintendo version was the first one released and, out of all the current versions, is the weakest in every category. The PC version followed shortly and is considered to be the second best. The Dreamcast version is without a doubt the best one. We have one in the office and I have spent a lot of time playing the DC game. It is quite a remarkable game; the graphics are already pushing the DC, and I think it will take a very long time before I see or play another console platformer like Rayman 2. The main point of the game is to find your lost friend Globox, but I won't reveal any more info in the intro. If you want to know more, you have to read.
Ubi Soft promises sharper, smoother, and more enhanced visuals. The PS2 will be using its technological advantages over Sega to make the best of Rayman 2. Take a look at the first screens of Rayman 2: Revolution and notice how the graphics look much sharper than in the DC counterpart. Not only that, but the lighting in all of the stages is set more realistically. The detail is much smoother than in the PC or DC versions; having played both of them, that is an easy bet to place. Rayman has never looked so good, and Ubi Soft's promise seems to be very true. More on the visuals as more screens and possibly videos are released.
Ubi Soft has said that instead of the linear gameplay found in all of the current Rayman 2 games, the PS2 version will let gamers roam around more. What I loved about the DC version of Rayman 2 was the awesome control; I felt as if the game was made to be played on the DC pad. The layout of the buttons was ideal for my hand. The A and B buttons were the main action buttons, while X and Y rotated the camera. Then we have the shoulder buttons: the left one locks Rayman on to an enemy and the right one brings up his status. The first thing that will be noticed is the level change that the PS2 version of Rayman 2 will receive. Ubi Soft says that the PS2 levels in Rayman 2 will be re-designed, but not to the point that they are completely different from the DC or PC counterparts.
On a final note, Rayman 2: Revolution has also been reported to be the hardest version out of the three current ones and the upcoming PS version. My one question is whether or not the PS2 version will feature voice acting like its PlayStation brother. Rayman 2: Revolution should be released before the year's end on PS2. I am currently eagerly waiting for the PS version and now I can add the PS2 version to the list as well. It seems as if Ubi Soft is trying to make each Rayman game as distinct as possible from one another. My advice is that if you own (or will own) the DC, PS2, or PS, I suggest picking up all three versions of Rayman 2 (don't buy all three, I am just making a point here). Their distinct qualities make them all feel like new adventures, especially the PS version. Ubi Soft will be implementing voice acting as well as new stages for the original PS gamers.
6/12/2000 SolidSnake | 计算机 |
2015-48/0317/en_head.json.gz/8646 | Clothing manipulation
Takeo Igarashi
University of Tokyo 7-3-1 Hongo, Bunkyo, Tokyo
John F. Hughes
Brown University Providence, RI
· Proceeding
UIST '02 Proceedings of the 15th annual ACM symposium on User interface software and technology Pages 91-100 ACM New York, NY, USA ©2002 table of contents
ISBN: 1-58113-488-6 · doi: 10.1145/571985.571999
2002 Article
· Citation Count: 31
Concepts in this article
Textile A textile or cloth is a flexible woven material consisting of a network of natural or artificial fibres often referred to as thread or yarn. Yarn is produced by spinning raw fibres of wool, flax, cotton, or other material to produce long strands. Textiles are formed by weaving, knitting, crocheting, knotting, or pressing fibres together. The words fabric and cloth are used in textile assembly trades as synonyms for textile. more from Wikipedia
Clothing Clothing is a term that refers to a covering for the human body that is worn. The wearing of clothing is exclusively a human characteristic and is a feature of nearly all human societies. The amount and type of clothing worn depends on physical, social and geographic considerations. Physically, clothing serves many purposes; it can serve as protection from the elements, it can enhance safety during hazardous activities such as hiking and cooking. more from Wikipedia
Three-dimensional space Three-dimensional space is a geometric 3-parameters model of the physical universe (without considering time) in which we live. These three dimensions are commonly called length, width, and depth (or height), although any three directions can be chosen, provided that they do not lie in the same plane. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space. When n = 3, the set of all such locations is called 3-dimensional Euclidean space. more from Wikipedia
Cloth modeling Cloth modeling is the term used for simulating cloth within a computer program; usually in the context of 3D computer graphics. The main approaches used for this may be classified into three basic types: geometric, physical, and particle/energy. more from Wikipedia
Drag-and-drop more from Wikipedia
3D computer graphics 3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. more from Wikipedia
Interaction technique An interaction technique, user interface technique or input technique is a combination of hardware and software elements that provides a way for computer users to accomplish a single task. For example, one can go back to the previously visited page on a Web browser by either clicking a button, pressing a key, performing a mouse gesture or uttering a speech command. It is a widely-used term in human-computer interaction. more from Wikipedia
Paint Paint is any liquid, liquefiable, or mastic composition which, after application to a substrate in a thin layer, is converted to a solid film. It is most commonly used to protect, color or provide texture to objects. more from Wikipedia
Author Tags
graphical user interfaces
| 计算机 |
2015-48/0317/en_head.json.gz/8762 | Internet Tumblr Goes iOS 7
Nathan Snelgrove on December 28th 2013
blogging, iOS 7, social networking, tumblr 25
I love Tumblr. I didn’t think I would, after I saw some of the stuff that my younger sister was always checking out on it, but sometimes I surprise even myself. At this point, I keep two blogs up on the site: a rarely-updated personal blog that’s not even worth linking to and a music recommendations blog called Unsung Sundays, and I couldn’t be happier with the service. It’s one of the few CMS systems that doesn’t feel totally broken.
That being said, I didn’t think I’d ever start making posts on my iOS devices and keeping them. While the Tumblr app for iPhone and iPad has been capable of doing that sort of thing for a while, it hasn’t always been as smooth of an experience as it’s been on the desktop. But recently, that changed with the iOS 7 update for Tumblr, which makes it the best blogging app on iOS bar none. Read on to find out why you might consider taking up blogging again, but from your phone.
Lifestyle Inspire and Be Inspired with We Heart It
Hannah Richards on August 19th 2013
photo sharing, pinterest, social networking 28
I am a longtime Pinterest user and can easily spend an entire afternoon browsing, pinning and pinning some more, so you can imagine my delight when I stumbled upon We Heart It by chance in the New & Noteworthy section of the App Store.
We Heart It was founded in 2008 and is slowly catching up with Pinterest in terms of monthly users and the amount of content available. But, the question is, do you and I really need another photo-sharing service, let alone one that is hailed to be as addictive as Pinterest? Click “more” to find out.
Lifestyle Zephyr Makes App.net Look Elegant
Nathan Snelgrove on May 20th 2013
ADN, app.net, social, social networking 2
I’ve written my fair share of articles about App.net and the clients I test out, but there’s always new ones out there that I want to try. I have yet to find the ADN client that fits every one of my needs.
I’m aware, of course, that most people are using Netbot these days; It’s free and admittedly awesome. But it’s wearing Tweetbot‘s clothes, and I want my ADN experience to feel visually unique from Twitter without losing the power of Tapbot’s app. In the past, I’ve tried Rivr (for iPhone), which was full of features and pleasant to look at, but after several weeks of use, it didn’t capture my attention anymore and I was back to Netbot (which also has an iPad app).
Zephyr is the closest I’ve come to the Netbot experience. In colloquial terms, I’m really stoked about this app. Read on to find out why. (more…)
Lifestyle Record and Share Your Voice and Other Audio With Dubbler
Marie Look on May 8th 2013
audio, dubbler, social networking 2
The latest in the realm of audio social networking has arrived in the form of Dubbler, an app that allows users to record up to 60 seconds, then edit the sound bite with voice filters and share it with the world.
What you can do with Dubbler is limited only by your imagination — sing, tell short (very short) stories or jokes, provide pop culture commentary, spread some news, it’s up to you. Click through to see how it works. (more…)
Lifestyle Read Through Your App.net Stream With Rivr
Nathan Snelgrove on May 8th 2013
ADN, app.net, social networking 0
App.net (or ADN) features an unusual business model, particularly for a social network. The idea of a paid social network that depends on its developers to advance the platform is unheard of, but comes with some significant benefits. For one thing, you know your information is private. App.net isn’t going to sell anything you post. For some people, this is enough to differentiate the “Twitter clone” from Twitter. But perhaps more interestingly (and not unlike the Twitter of old), it encourages developers to race towards innovation.
Rivr is trying to claim a piece of that innovation for itself. Rivr is an app that focuses on making your stream beautiful and intuitive, and it’s not afraid to bend some interface rules to get it done. Rivr is extremely functional, but you might be wondering if its ease of use gets lost in all this extra functionality, or if it’s as easy to use as Netbot, perhaps the most popular ADN client in town. Read on to find out. (more…)
Lifestyle Document Your Day With Lightt
Hannah Richards on February 20th 2013
life, lightt, social networking 7
Like it or loathe it, the increased desire to document and share each and every aspect of daily life is here to stay, with Instagram, Hipstamatic and a whole host of photo sharing clones dominating the App Store year after year. As the old saying goes, a picture is worth a thousand words. But what if you’d like to say just that little bit more?
Enter Lightt, an app which captures normal day-to-day activities via a series of photographs that are then merged together to create a seamless visual timeline of your life. Sound interesting? Hit the jump to find out more! (more…)
Lifestyle Foursquare 5.0: Check Ins, Revisited
Connor Turnbull on June 21st 2012
check ins, location, social networking 1
Foursquare has always been the dominant force in the location-centric social networking game. As of April 2012, they have twenty million registered users and an average of around three million check-ins per day, most likely coming from mobile platforms.
Recently, they released a significant redesign of their service, pushing out an update that transforms the iPhone app by an order of magnitude. Alongside a brand new interface, Foursquare brought a number of new features to its iPhone app in a completely re-imagined package. Let's take a fresh look at Foursquare, and see what some have described as an entirely new app. (more…)
Internet Keeping Updated on your Lists using Tweetlist
Jake Rocheleau on April 9th 2012
lists, social networking, tweets, twitter 1
There are a large handful of Twitter clients in the iOS App Store. Both iPad and iPhone users have a wide variety to choose from, and we are seeing new applications enter the store each month.
Tweetlist is another such client with a pretty big twist: you can quickly flip through all of your different Twitter lists from a single easy-to-use interface. Let’s take a look after the break.
Lifestyle Pinterest: A Social Network With Legs
Kevin Whipps on February 22nd 2012
facebook, pins, social networking, twitter 14
There are so many new social networks out there that it gets daunting just to keep up. Between Facebook, Twitter, Path, Oink, Tumblr and everything else, who has time to actually get anything done? That’s why, for me, it takes an awful lot to decide to come onboard a new system.
But then Pinterest happened. At first, I wasn’t really sure if I liked it — it did seem a bit girly for my taste — and I wasn’t quite sure how it would fit into my life. But then I got the iPhone app, and a new perspective came up that I hadn’t really considered before: could I be social while still being unsocial? (more…)
Music Create the Soundtrack For Your Life With SoundTracking
Jesse Virgil on January 5th 2012
Music, social media, social networking, sountracking 1
Recently, I reviewed a niche social networking app called Oink, which lets you share the things you love with friends. Mainly, Oink is used to share a specific item, such as a Big Mac at McDonald's or your favorite cup of coffee at the local diner. While Oink fills this particular niche nicely, other apps are available in the App Store that fill other social niches. Instagram, for example, allows users to share photographs, and a little-known but highly usable app called Peepapp allows you to share the apps you've installed on your iPhone.
While food, photos and apps (especially apps) are great to share with friends, music is often one of the most shared topics of discussion. Enter SoundTracking, the nifty little app that helps you "share the soundtrack of your life." | 计算机 |
2015-48/0317/en_head.json.gz/9319 | Grady Booch On Rational Unified Process
Ok, I'm about to tell you everything you need to know about the RUP...all else is simply details.
First, the essence of the RUP may be expressed in its six best practices. These are not an invention, but instead are a reflection of what we've seen in working with literally tens of thousands of projects over the past 20 years, codifying what works in successful organizations and what is noticeably absent in unsuccessful ones: Is this useful....
Develop iteratively (every project has a regular rhythm, manifest as a series of continuous, regular releases of executables)
Model visually (we model so that we may better reason about the systems we are trying to build; the UML exists as the standard means of visualizing, specifying, constructing, and documenting these artifacts of a software-intensive system)
Manage requirements (everything is driven by use cases/stories which are continuously discovered, refined, and managed in the rhythm of the project, and so in turn drive unit and systems test as well as the system's architecture and therefore implementation)
Control changes (change is good insofar as it is directed by aggressively attacking the risks to success in the system)
Continuously verify quality (test continuously, using the use cases/stories as the baseline, and use these tests to measure progress and identify risks along the way)
Use component-based architectures (one grows a system's architecture through each iteration; we validate and verify the system's essential architecture early on so as to aggressively attack all technical risks and to raise the level of abstraction for the rest of the team by explicitly making manifest the design patterns/mechanisms and architectural patterns that pervade the system.
Second, there are a few implicit practices:
Develop only what is necessary
Focus on valuable results, not on how the results are achieved
Minimize the production of paperwork
Be flexible
Learn from your mistakes
Revisit your risks regularly
Establish objective, measurable criteria for your progress
Automate what is human-intensive, tedious, and error-prone
Use small, empowered teams
Have a plan
Third, there's some terminology worth noting, so that we level set the vocabulary of the process to all stakeholders:
The conduct of a project tends to follow four major phases, the end of each which represents an important business gate: inception (when the business case for the project is established and the basic boundaries of what's in and what's out of that case are drawn; at the end of inception we can say "yes, we should do this"), elaboration (gated by the establishment of the system's essential use cases and architecture, representing a direct attack that confronts the essential technical risks to the system; at the end of elaboration we can say "yes, we know we can do this"), construction (wherein the system is grown through the successive refinement of the system's architecture, along with the continuous discovery of the system's desired behavior; at the end of construction we can say "yes, the system is in a place where it may be fully deployed to its users" [it may have been incrementally deployed during construction, of course]), and finally, transition (wherein the business decision is made that aggressive investment in the system is no longer necessary, and the system enters a plateau of use and maintenance; at the end of transition we can say "this system is at end of life")
Since the RUP is about deploying systems for which software is an essential element (acknowledging that executables are the essential artifact that's produced continuously but that this labor has a business, economic, and technological context whose stakeholders contribute), there are several disciplines that engage in the development activity: business modeling, requirements, analysis and design, implementation, test, deployment, configuration and change management, project management, and environment. These disciplines represent different stakeholders, different views into the system, and different skill sets. Each discipline has its own rhythm, but all in harmony with the essential construction rhythm of the project as a whole.
In the conduct of a project, the RUP provides a common vocabulary for those things that get created during a project (artifacts), the roles/skill sets who create those things (workers) and the concurrent, interweaving workflows that those workers typically follow to manipulate those artifacts (activities). That's all you really need to know...
-- GradyBooch on the XpMailingList
Anyhow, looking at the best practices listed here versus an AgileProcess, the only explicit difference I can see is that "Control change" is philosophically opposed to "Embrace change". Control change is based on fear; embrace change is based on courage. ScrumProcess would also claim that control change isn't possible in a software project (at least not in the sense of "defined" process control; Scrum suggests using "empirical" process control instead).
Many would say that to embrace change you need to control it. It may be a semantics issue, but I don't think embracing change implies turning a blind eye to change.
If you use the word 'child' in place of 'change', you can see where my bias lies. 'Control change' is to 'embrace change' as 'control a child' is to 'embrace a child'. Thus, in my opinion, you don't need to control a child to embrace him or her. The two parenting philosophies are in fact very different, if not diametrically opposed. I'd say the same of software process philosophies. Of course, it's just an analogy and it's just my opinion.
As to 'throwing a blind eye to change', of course I agree with you; I'm just trying to highlight the underlying philosophical differences between RUP and more-agile methods. RUP assumes change should be controlled, like it is on an assembly line. Agile methods assume change should be adapted to because they also assume that it cannot be effectively controlled, like the wind; bend with the wind instead of being broken by it.
I think this comment misses the point. It does not say "Resist change". "Control change" refers to basic things like version control, doing builds frequently to ensure the software continues to work as change happens, defect and change tracking, and doing things incrementally to adapt to changes during the project. To wit, "Control Change" really refers to HOW TO "Embrace Change". | 计算机 |
2015-48/0317/en_head.json.gz/9986 | Shenmue 2
The Dreamcast's finest is finally ours... By Ben Jackson
The Dreamcast’s swan-song?
Finally, the wait is over. Yes, tired Dreamcast owners, rejoice: it's Shenmue II! After all of Sega's umming and ahhing about its release, it has finally hit our shores. For all of you wondering what all of the fuss is about, where have you been? Shenmue was the most awesome game on the Dreamcast, so this should be even better, right? Well… sort of.
The plot is simple: you play Ryo Hazuki, a young Japanese man who is on a quest for revenge against Lan Di, a nasty piece of work who killed Ryo's father. The only leads that Ryo has are a letter from an address in Hong Kong and a mysterious Phoenix Mirror. Not a lot to go on by any means, but Ryo seems rather peeved about the whole affair and will stop at nothing until revenge is his. Fair enough. I wouldn't try to stop him; he lives in a dojo! Anyhow, Shenmue II picks up where the first one left off, with Ryo arriving in Hong Kong, very alone and with only one lead: the letter from Hong Kong. Go to it Ryo!
There have been several changes to the format this time around. Firstly, there is no dubbing. Gone are the English voice-overs of the original; the speech is all Japanese with English subtitles. Many may feel this is a bad thing; I, however, think that it is an improvement. The game feels even more real now. I mean, the game is set in Hong Kong and China. Do they all speak in English? Nope. It all adds to the incredible sense of realism that the game emits.
Realism. Shenmue II positively oozes it. When it rains you just want to go for a walk in it, just to experience each drop and sound; then, when you go walk about in Wise Men's Quadrant and the Jumbo flies overhead, you can't help but be amazed. The graphics are that detailed and the sound is just awesome; I haven't seen a game that comes anywhere near it, maybe GTA3 on the PS2, and even that isn't in the same league. However, these features were all apparent in the original, so what has been improved? Well, Shenmue II is roughly three times bigger than the original, with backdrops including Aberdeen (in Hong Kong, not Scotland, duh!), Kowloon (a high-rise city outside Hong Kong), and rural China. Each scene is very different; Hong Kong is an airy cityscape with plenty of op
2015-48/0317/en_head.json.gz/10101 | The S+V Interview: Mass Effect 3 Audio Lead Rob Blake
By Timothy J. Seppala Posted: Mar 1, 2012
The Reapers are upon us. Mass Effect 3 is out next Tuesday, and with it Commander Shepard's story is coming to a close. I took the opportunity to chat with the series' audio lead, Rob Blake, about his team's work in defining the Mass Effect franchise. Over the course of his career he's worked on everything from feature films to Spongebob Squarepants games, but counts his efforts at BioWare as the most challenging gigs he's encountered. "What we do here dwarfs anything I've done before," he told me.
We discussed what makes Mass Effect sound like Mass Effect, working with Clint Mansell, the origins of the end credits song from the first game, and the difference between a salarian's singing voice and speaking voice. And he dished on the sounds of elcor sex.
What's always stood out for me is how well-realized the audio has been in the Mass Effect series. The first game almost felt like a '70s sci-fi movie. People have said it sounds like John Carpenter did the soundtrack. It was very synthesizer-based; Mass Effect 2 was different and much more organic. What was the idea behind the shift in styles?
It was more of a global change from the overall game rather than a specific musical direction change. The second one was more dark and gritty and we brought in a lot of morally ambiguous characters such as The Illusive Man - who you never quite trusted. You were going to darker, seedier parts of the universe. We wanted the music to reflect that.
In the first Mass Effect - to the player, Shepard and humanity as a whole - we wanted it to have a clean and sterile feel to it, almost a sense of naivete in a way. The characters were almost all humans, it was like a journey of discovery to them; they were still figuring out who they were in the universe. The music was used to reinforce that.
In the second one, things got a little bit darker. The threat of the Reapers was a little more understated in a way, and so that was reflected in the musical changes.
The soundtrack to Mass Effect 2 spanned several different releases; the first was just one. Can we expect something similar with Mass Effect 3's soundtrack in terms of multiple albums?
The split releases for Mass Effect 2 were due to DLC (additional downloadable content) releases. After the game came out, we did some interesting stuff and we had new music for each one. We tried out some new things. They were released to coincide with that. We haven't announced our DLC plan for Mass Effect 3, but I'm working on the soundtrack right now and it's going really well. I have all that I need. [Laughs]
I was thinking in terms of how Mass Effect 2 had "Atmospheric," "Combat" as well as the main soundtrack.
The "Atmospheric" and "Combat" soundtracks came out after the main one; it was important to get that stuff out. We had a lot of material for Mass Effect 2 and we had a good design for it as well. We changed the audio systems between Mass Effect and Mass Effect 2 so we had a lot more flexibility with the music system. So, we made a lot more different elements that mixed together as you played through. As enemies would fade in and out, the music would switch between these different layers. As a function of that, we had a lot more content to play with.
Clint Mansell scored Mass Effect 3. He's known for simultaneously beautiful and melancholy scores. He's done a lot of work with Darren Aronofsky; he scored Requiem for a Dream, The Wrestler, and Black Swan.
Was he an obvious choice to score Mass Effect 3 based on his repertoire and tone?
After we finished Mass Effect 2, (project director) Casey Hudson and myself sat down and we talked for a long period of time about what we wanted to do musically. We discussed how the third game was going to be different in some ways and what we were going to do with the musical narrative. One of the key things we wanted to do was focus on the emotional connection between the different characters. Really, because of the large-scale galactic war that's going on all around you, sometimes that can be a little too much to take in. We wanted to focus on Shepard's connection with the individuals and the relationships he or she has built with those people. It became important for us to make sure that was something we musically hit.
We talked about composers and people we respected and the sort of music we thought captured the emotional resonances. I'd been listening to a lot of Clint Mansell's work at the time, I'm a big fan of his previous scores. We got talking to his people and we had a brief chat with him. He just talked about music and composition in ways that other composers we'd spoken to hadn't; it was an interesting perspective he had on composition. That started the relationship with him. We got some amazing pieces from him and with him, we helped build the musical backbone to the game. It's not just Clint Mansell we've been working with; we have four other composers as well who are all Mass Effect veterans: Chris Lennertz, Sascha Dikiciyan and Cris Velasco worked on the Mass Effect DLC, and Sam Hulick did the first Mass Effect. The soundtrack was pretty evenly split between everyone.
There's a real range of content there and it was my and Casey's job to ensure the overall musical narrative had a good flow to it and captured those emotional moments we have. We're really happy the way the soundtrack went. Out of the three games, it's certainly my favorite; it has some awesome stuff in it.
After I finished the first Mass Effect, the song that played over the credits, "M4 Part Two," really struck me. It jumps out, it feels like it shouldn't belong with the rest of the score because it's a vocal track - but it fits right in. How did it come about?
I didn't work on the first score, but I can give you a bit of info on that. I've been living Mass Effect for the last four years! [laughs] I know a fair bit about the history of Mass Effect despite not being around at the time.
The credits pieces have always been a big focus for us. We spend a lot of time thinking of how to wrap the beginnings and the ends up for each of our products. We want to make sure we set the scene right and in a satisfying way for the player. With each of our games, we think about credits music as well, it's not an arbitrary decision for us; there's a lot of effort in picking the right pieces. That piece came from a lot of discussions. Faunts - the band that wrote that song - is actually an Edmonton band, from where BioWare is headquartered. It's great we could keep it local.
NEXT: Page 2 » | 计算机 |
2015-48/0317/en_head.json.gz/11155 | Home > Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
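To make the driver idea concrete, here is a minimal Python sketch of how a driver profile might be rolled up into a rough risk gauge. It is only an illustration: the driver names and the three-level rating scale are invented for the example, and the MRD defines its own analysis and reporting approach rather than this particular scoring rule.

from dataclasses import dataclass

# Schematic only: the MRD defines its own driver-analysis and rating scheme;
# the driver names and three-level scale below are invented for illustration.
@dataclass
class Driver:
    name: str
    rating: str  # "favorable", "uncertain", or "unfavorable"

def gauge_mission_risk(drivers):
    # Reduce a driver profile to a rough, qualitative gauge of mission risk.
    if any(d.rating == "unfavorable" for d in drivers):
        return "high"      # at least one systemic factor works against the objectives
    if any(d.rating == "uncertain" for d in drivers):
        return "moderate"  # unresolved conditions worth watching
    return "low"

profile = [
    Driver("Objectives are realistic and clearly understood", "favorable"),
    Driver("Plans and resources are sufficient for the work", "uncertain"),
    Driver("Key technologies are mature enough for the mission", "unfavorable"),
]
print(gauge_mission_risk(profile))  # -> high

In the real method the drivers come from the system's stated objectives and are evaluated with far more care; the point here is only the shape of the analysis: a small set of rated systemic factors rolled up into an overall gauge of mission risk.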
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods | 计算机 |
2015-48/0317/en_head.json.gz/11837 | Software // Information Management
7/12/2005 01:03 PM · Doug Henschen · News
In Focus: Up Close With HP's Content Management Guru
There's plenty of hype about "enterprisewide" content management, but few companies have come as close as Hewlett-Packard to taking a truly holistic approach.
Mario Queiroz (MQ): Things really started with the merger of HP and Compaq. Both companies had content management systems, but they were fairly fragmented, with a lot of departmental deployments. We knew we wanted something streamlined and consolidated, so in May of 2002 we brought together resources and decided to take a strategic look at the problem.
ECM can mean a lot of things, but our project has really been about sales and marketing content. That includes marketing collateral and product content as well as solutions information created by the business units. We're working with more than 3,000 authors across product management and marketing communications (marcom) units, and we're funneling their content--both chunks and larger documents--into a create-once, use-many approach. DH: How does ECM fit into the larger context of information management at HP? MQ: We're part of an E-Business, Customer and Sales Operation group that handles activities including pricing, order management, business intelligence, e-commerce, hp.com, CRM, partner relationship management and other infrastructure areas. Our piece is getting data and information to our customers and internal systems on behalf of the business units--Imaging and Printing, Personal Systems, Servers, Storage, Software, Services and so on. Day-to-day, that means working with the product marketing and engineering organizations at the worldwide and regional levels and funneling their content into the right repositories so we have it structured the right way.
DH: Just what do you mean by structured?
MQ: We've set standards for taxonomy that enable business units to create content efficiently and that drive our strategy of creating content once and using it many times. That's what we do on the "back end." We then offer a subscription service to more than 40 internal publishers, including marcom organizations that print collateral material as well as sales force portals and the hp.com e-commerce Web site that make content available online. The service has a standardized interface for pulling or pushing content, and there's also an interface to the product lifecycle and product data management (PLM/PDM) systems that serve the supply chain. If a product description has to appear on an invoice, for example, we want to make sure that it's the same description that's found on the Web site.
DH: Why is the one-to-many approach so important?
MQ: Without it, we'd have employees in regional markets separately calling up, for example, the marcom people in Boise, Idaho, for information on printing and imaging products. If you start drawing the point-to-point connections between 17 business units and more than 65 localized markets, you quickly get to a very complicated environment. Now that we have a centralized resource, we're showing all our constituents where to go to find the content and how to get it out. DH: What's the nature of the content you're managing? MQ: It varies from sales and marketing literature, which can be very granular, with technical specs such as processor speeds or page-per-minute ratings, to more conventional documents, such as solutions white papers used by the sales force. We didn't think we should take a single approach, so we put much of the granular product content in a product information management solution from Trigo [and since acquired by IBM] and similar solutions in Europe and Asia. That system manages more than one million data elements. For more unstructured content, we use a Documentum repository shared with the support management team. That repository currently manages about 1.8 million documents.
DH: That's a lot of content, yet you say it's all covered by a single taxonomy? MQ: We've internally agreed upon a taxonomy and done work on metadata standards, but we're still overcoming the pain of having multiple taxonomies and marketing hierarchies [stemming in part from the HP/Compaq merger]. Over the last three years, we've been driving region by region and product team by product team to a single taxonomy. The documents within our next-generation Documentum implementation, for example, and the product data elements in our product-information management systems [Trigo/IBM and homegrown EMEA/Asia solutions] are structured according to a standard, seven-layer marketing hierarchy that starts with the general product category and then drills down to product families, models and SKUs underneath each model. We're upgrading to Documentum 5 in part because work had already begun within a different group within the company to configure it to the company standards. Our approach was to complete that work and deploy it aggressively across the corporation. DH: What was the metadata work about?
MQ: We needed terminology standards, particularly for the unstructured documents. We were having a terrible time sticking to one version of a document, so we went through a modeling exercise, breaking down product content into different attributes. One key piece of functionality we exploit is inheritance because it can automatically apply metadata based on parent-child relationships between documents. DH: How do you enforce consistent use of the metadata?
MQ: At the very beginning, three years ago, we worked with each business unit to assign people to help us define the metadata standards and that gave us a lot more leverage to promote adoption. Now, when somebody wants to enter a document into the system, they have to have key data entered, but there's a degree of inheritance. If they want to create a document that applies to all X SKUs, for example, they don't have to enter that metadata over and over again.
DH: Are you exploiting XML to promote content reuse?
MQ: Absolutely. If you don't have documents in a form that makes it easy to pull out chunks of content, you're going to end up with many, many instances of the same content. Our direction is to break documents down and use them interchangeably. Thus far, we've transformed somewhere between one-third and one-half of our documents in the Documentum repository into XML. We're also using [Blast Radius] XMetal [an XML authoring tool] on the creation side, among other selected tools. Right now, the more technical content creators are using XMetal, but we're trying to get some of the marketing types to use our content creation tools. DH: Does XML also figure in translation and localization?
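[To make the create-once, use-many chunking idea concrete, here is a small illustrative sketch -- an editorial aside, not part of the interview. The XML element names, attributes, and hierarchy are invented for the example; the interview does not describe HP's actual schemas. The point is simply that a chunk authored once can be pulled out by ID and reused by any downstream publisher.]

import xml.etree.ElementTree as ET

# Hypothetical product-content document; the element and attribute names are
# invented for illustration and are not HP's actual schema.
PRODUCT_XML = """
<product family="ExampleJet" model="1200">
  <chunk id="overview" audience="marcom">Fast, quiet desktop printing.</chunk>
  <chunk id="spec-ppm" audience="datasheet">Prints up to 15 pages per minute.</chunk>
</product>
"""

def get_chunk(xml_text, chunk_id):
    # Pull one reusable content chunk out of the product document by its id.
    root = ET.fromstring(xml_text)
    node = root.find("./chunk[@id='%s']" % chunk_id)
    return None if node is None else node.text.strip()

# The same chunk can now feed a web page, a printed datasheet, or an invoice line.
print(get_chunk(PRODUCT_XML, "spec-ppm"))

Because every publisher pulls the same chunk, a wording change made once propagates everywhere the chunk is used, which is the payoff of the approach described above.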
MQ: Yes. We're using Trados' translation memory technology companywide, and we're ramping up more and more of the sales and marketing content [in addition to product manuals, which are managed by another group]. We probably have 10 to 15 percent of our content flowing through translation and localization.
DH: How much has HP spent and what can you say about ROI?
MQ: I can only say that the investment is in the millions. It's a major strategic initiative that's been three years in the making. We've had efficiency gains of about 30 percent per year. As an example, we figure we're saving about $6 million per year just from our translation and localization infrastructure. We've also lowered the cost of developing content for new products by more than half. One of the big reasons we made the investment is that it has allowed the company to scale without having to scale equally in manpower, which we would not be able to afford.
Resources: a. Content in the Age of XML
http://www.intelligententerprise.com/showArticle.jhtml?articleID=163100779
b. Reusing Content Without Starting From Scratch
c. Content Reuse in Practice
http://www.steptwo.com.au/papers/kmc_contentreuse/index.html
d. Introduction to Structured Content Management with XML
http://www.cmswatch.com/Feature/112
| 计算机 |
2015-48/0317/en_head.json.gz/11952 | Published on MacDevCenter (http://www.macdevcenter.com/)
A DNS Primer
by Dan Benjamin
Editor's note: Sometimes we forget about the gems stashed away in Mac OS X. A great example is the Network Utility application, hidden away in your Utilities folder. In this article, Dan Benjamin shines a light on this handy tool and provides you with a sweet primer to understanding DNS. If you're already an expert, then you might want to hop over to Jason Deraleau's more advanced "Implementing BIND on Mac OS X."
The World's Address
Have you ever wondered, "How does the email I'm sending, or the text on the site I'm reading, find its way from here to there?" Behind the scenes, connecting every machine on the entire Internet is a system called DNS, the Domain Name System, which makes it all possible.
Each system on a modern network is assigned a unique address -- the same way that your home or office has a unique street address. There can only be one building at a specific address at any given time, and the same is true with machines on the Internet.
Finding your way from one building to another is easy, if you have someone to ask who knows the shortest route from here to there. Just like real-world traffic cops, DNS is the Internet's traffic cop. By way of a distributed system of names and numbers, it always knows how to get from one machine to another. No matter where or how far apart they are, it always knows the addresses.
Each time you type a URL into the address bar of your web browser, your computer talks to the DNS server of your Internet service provider (ISP) and asks it how to get to the web site you've specified. Behind the scenes, the DNS server is taking the URL you've given it and translating that into the system's unique address, a twelve-digit number called an IP address. Every machine on the Internet has one, and DNS keeps track of them all.
For most people, remembering numbers isn't easy. It's much easier to remember a domain name, such as www.macdevcenter.com, rather than its IP address, like 208.201.239.37 or 208.201.239.36. DNS is the backbone of the Internet, handling the mapping of IP addresses (like 208.201.239.36) into human-friendly names, like www.macdevcenter.com.
Look Me Up
Each computer on the Internet has its own address, just like a house or building on a street. In this way, the Internet can be thought of as a big city. It can be broken down into smaller neighborhoods, or networks, which are connected to each other by big roads, the Internet pipelines. Finding the way around your own neighborhood isn't too much trouble, but leaving this familiar territory and venturing out onto the big roads without a map can get confusing. The same is true for the Internet and its connected networks -- we need a map, a system to help us find our way around. DNS provides us with that map.
You can see this mapping in action by using the Mac OS X Network Utility, located in the /Applications/Utilities folder. Launch Network Utility, select the Lookup tab, and enter "macdevcenter.com" into the address box. "Macdevcenter.com," just like "apple.com" and "google.com," are domain names, which map to the unique IP addresses that your machines will use to talk to them.
When you click Lookup, you'll see a list of IP addresses in the larger text box below.
This is really just a graphical front end to the UNIX command nslookup, which provides the same information from the command line.
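For readers who would rather do the same lookup programmatically, the short Python sketch below asks the system's configured DNS resolver for the addresses behind a hostname, much as Network Utility and nslookup do. The hostname is just the example used in this article; the addresses returned will be whatever DNS holds at the time you run it, so they may differ from the ones quoted above.

import socket

def resolve(hostname):
    # Ask the system's configured DNS resolver for the addresses behind a name.
    # gethostbyname_ex returns (canonical_name, alias_list, ipv4_address_list).
    canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
    return canonical, addresses

if __name__ == "__main__":
    name, ips = resolve("www.macdevcenter.com")  # example hostname from the article
    print(name, "resolves to:", ", ".join(ips))

Running it prints the canonical name followed by the current list of IP addresses — the same name-to-number mapping Network Utility shows in its Lookup tab.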
When Dinosaurs Ruled the Earth
Back in the early days, when independent, unconnected networks were the norm, managing servers and systems was a simple task. Users knew how to get around their network. They knew the host systems, how to find them, and what role each played on the network.
As these networks grew in size (because more client computers were attached to the network), and more administrators became responsible for maintaining them, a way to keep track of this information was needed. At first, a simple file containing a list of hosts and their IP addresses was sufficient. This file mapped the "Internet Names" given to network hosts to the IP addresses to which they were assigned. This file would be updated and made available to all Internet administrators, who would then download the file and copy it to all of their servers. When a new server was added or an old one removed, the file would have to be downloaded and copied to each machine again.
As the number of Internet hosts grew, updating and passing a HOSTS.TXT file around manually became too difficult. Users needed a system that was easier to maintain and update. DNS was created to handle the task of disseminating host information automatically.
Each Internet-connected network would be identified by a top-level domain (TLD). Initially, these included .com, .net, .org, .edu, .gov, .mil, and .arpa (as well as a feast of two-letter country zones, such as .us, .uk, etc.). Each connected network, or "zone," (usually a company, college, or military division), would be identified with a name, such as oreilly.com, whitehouse.gov, or stanford.edu. Since that time, many more TLDs have been added, including .cc, .tv, and .biz, and many more. Each network is responsible for maintaining its own DNS servers, at least two machines (a primary and a secondary, or backup, machine in case the first one goes down) that are dedicated to providing this information 24 hours a day, 7 days a week. When you enter a URL into your web browser and press return, your computer talks to your ISP's DNS server, which in turn talks to one of the "top-level" DNS servers. For example, if y | 计算机 |
2015-48/0317/en_head.json.gz/12728 | Our Authors
Do you want to write for Packt?
The Packt Author Website is your resource for discovering what it is like to write for Packt, learning about the writing opportunities currently available, and getting in touch with a Packt editor.
Aurobindo Sarkar
Aurobindo Sarkar is actively working with several start-ups in the role of CTO/technical director. With a career spanning more than 22 years, h...
Sekhar Reddy
Sekhar Reddy is a technology generalist. He has deep expertise in Windows, Unix, Linux OS, and programming languages, such as Java, C# , and Py...
Ahmed Aboulnaga
Ahmed Aboulnaga is a Technical Director at Raastech, a complete lifecycle systems integrator headquartered at Virginia, USA. His professional f...
Harold Dost
Harold Dost III is a Principal Consultant at Raastech who has experience in architecting and implementing solutions that leverage Oracle Fusion...
Arun Pareek
Arun Pareek is an IASA-certified software architect and has been actively working as an SOA and BPM practitioner. Over the past 8 years, he has...
Jos Dirksen
Jos Dirksen has worked as a software developer and architect for more than a decade. He has a lot of experience in a large variety of technolog...
Justin Bozonier
Justin Bozonier is a data scientist living in Chicago. He is currently a Senior Data Scientist at GrubHub. He has led the development of their ...
Andrey Volkov
Andrey Volkov pursued his education in information systems in the banking sector. He started his career as a financial analyst in a commercial ...
Achim Vannahme
Achim Vannahme works as a senior software developer at a mobile messaging operator, where he focuses on software quality and test automation. H...
Salahaldin Juba
Salahaldin Juba has over 10 years of experience in industry and academia, with a focus on database development for large-scale and enterprise a...
Kassandra Perch
Kassandra Perch is an open web developer and supporter. She began as a frontend developer and moved to server-side with the advent of Node.js a...
Saurabh Chhajed
Saurabh Chhajed is a technologist with vast professional experience in building Enterprise applications that span across product and service in...
| 计算机 |
2015-48/0317/en_head.json.gz/13929 |
Webmachine and RESTful Web Frameworks with Justin Sheehy
Interview with Justin Sheehy
by Dio Synodinos
Bio
Justin Sheehy is the CTO of Basho Technologies, the company behind the creation of Webmachine and Riak. He was a principal scientist at the MITRE Corporation and a senior architect for systems infrastructure at Akamai. At both of those companies he focused on multiple aspects of robust distributed systems, including scheduling algorithms, language-based formal models, and resilience.
My name is Dionysios Synodinos and we are here at QCon London 2011 with Justin Sheehy to talk about RESTful web frameworks. Justin, would you like to tell us a little bit about yourself? I'm the CTO of Basho Technologies. At Basho we produce a lot of open-source software, including Webmachine, which I think we'll talk about today, and also Riak, which we sell a version of - that's what our business is all about. In addition to the distributed systems and other obvious interests that I have there, I'm personally very interested in compilers, protocol design, that sort of thing.
Your company, Basho Technologies, has been working on Webmachine. Would you like to tell us what Webmachine is? We actually released Webmachine about two and a half years ago, and we've used it ourselves quite a bit as well as now seeing it used all over the place. You can think of it not so much as a traditional web framework, but more as a toolkit for really easily building well-behaved HTTP applications and systems. So it's much more about the web than something that lets you write some objects and somehow expose them to the web. A normal web framework, the kind people usually think of, is usually a mostly finished application where most of the way it looks, most of how it talks to databases, and all those things are 90% defined, and you fill in whatever the logic and data model is for your app. This is great, because it does all this work for you and it's very easy as long as the application you are building is one that looks mostly like the applications that the framework author built, and that's a huge timesaver for people building certain classes of applications. But even though you can build those sorts of direct user-facing CRUD-on-a-database applications in Webmachine, and people have built full web applications that way, the best places for Webmachine tend to be middleware or systems that didn't previously have an HTTP interface, but where you found value in exposing them on the web. Instead of it being that you take your object and somehow cram it onto the web, you write a little bit of code that describes the web-relevant parts of your system: how can you tell when something was last modified, how can you tell if a resource or a request is well-formed, how do you produce a BODY of a given type. You do that instead of saying "I have all these objects. How do they respond to 'GET'? How do they respond to 'PUT'?" and things like that.
What led you to choose such a RESTful architecture over how traditional frameworks work? When we started making Webmachine that was really the purpose of us building it: we saw huge value in the architecture that’s made the web successful. We looked around for a framework or toolkit that would let us take the best advantage of that architecture in exposing some of our systems like Riak with good REST-friendly HTTP compliant interfaces and we didn’t really find anything that would fit our needs. So, those web architecture elements or REST elements can be seen in the ways that companies like Akamai are successful, the ways that people can build mashups out of other systems. All of those things, the loose coupling that you get in an environment like HTTP, the ability to work really well with intermediaries because you have a uniform interface, all of those elements I think are strengths anywhere, but especially when what you are building is a layer that you don’t know what else will be in front of it.
Component-based frameworks that don't actually fully utilize HTTP have a big collection of tools to support developers. What about RESTful frameworks or Webmachine in particular? How would that go about developing a Webmachine application? It's certainly true that a lot of the systems, the more traditional object-and-class, "here are some components" web frameworks, have a lot more tools, and part of that is because the whole point of a lot of those frameworks is to make you not think any differently about your programs than you always did before. So there are lots of tools for writing generic, big-pile-of-objects Java programs, and you get to reuse those tools if you haven't changed the way you think at all. With Webmachine, on the one hand, you don't get quite as much advantage out of those tools, but on the other hand, you get to write very simple, small pieces of code that tend to be purely functional or close to it; in a very natural way you write a function over resources that says "Here is the entity tag" and so on. It's very easy, in whatever your favorite editor is, to write small self-contained pieces of code that the existing tools you prefer to program in support very well. The fact that they don't support the web aspect of this specifically almost doesn't matter, because the web-specific part of a lot of those other component frameworks is the one that they shouldn't have. What I mean there is, in a lot of other frameworks, you have some objects and then you have to define how they respond to a "GET" and how they respond to a "PUT", and you have methods for this, which is completely opposite to what the web is all about. HTTP has already defined what "GET" means and what "PUT" means, and if you are writing that from scratch for everything you do on the web, writing a new method or a new function for what "GET" means, you are not going to do it right. You are going to leave out some things that are correct behavior, you are going to do some things wrong. So it's those parts of some other frameworks that you need extra tooling support for, whereas Webmachine takes a very different approach and says "These things are the uniform interface of HTTP. 'GET' always works the same, 'PUT' always works the same." and so on. So, all you define are a few facts about your resources or objects or whatever you like to call them, and Webmachine manages figuring out what status code to respond with and all those sorts of things.
Would you like to go over the usual workflow of a response I would get from Webmachine? You have a big diagram on your site which is basically iterating over the HTTP spec? Yes. That diagram is often used to stand in for Webmachine and help people understand Webmachine, and it is actually directly reflected in the Webmachine code. That flowchart is what's called "The Decision Core" in Webmachine, and when a request starts in Webmachine, that flowchart is walked. The way it works is that almost every decision in that chart, deciding what status code to respond with and what other data is going to go along with it, every single one of those decisions has a sane default. You don't have to fill in all these decisions at all. All you really have to do is say, at the very least, "How does my resource produce a basic representation? What does the HTML or whatever of a basic response look like?" Webmachine will follow all those default choices through the framework and end up at, say, 200 OK and here is your response. But then, if you want to take more advantage of HTTP, like if you want to have conditional requests or better cacheability or anything like that, all you have to do is write individual functions or methods that override the defaults for whichever of the decisions you'd like to do that for. Some of the classic examples that everyone knows about are things like ETags and last-modified and all these sorts of things, but there are many you can look at. HTTP I think is a very deceptively simple protocol in that everybody has used it, everybody has written some code that uses it, it feels simple, but it's actually really complicated when you look through all the details of how different status codes can interact and what different headers can mean. Webmachine, through that decision process, manages all of that and encapsulates all of that so you don't have to write all of that "What do we do if we've received this conditional header?" and all these sorts of things. All you have to do is provide some predicates that say "This is a fact about these resources" or "If this thing is true about the request being made, then this other thing is what's true about the resource". That's the core model of Webmachine.
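To make that concrete, here is a small illustrative sketch in Python of the idea of a decision core with overridable defaults. It is only a toy to show the shape of the model, not Webmachine's actual Erlang API or its full HTTP flowchart:

    # Toy sketch: a resource supplies facts with sane defaults,
    # and a generic decision core turns them into a status code.
    class Resource:
        def service_available(self, request): return True
        def resource_exists(self, request):   return True
        def generate_etag(self, request):     return None
        def to_html(self, request):           return "<html><body>hello</body></html>"

    def run_decision_core(resource, request):
        if not resource.service_available(request):
            return 503, ""
        if not resource.resource_exists(request):
            return 404, ""
        etag = resource.generate_etag(request)
        if etag is not None and request.get("If-None-Match") == etag:
            return 304, ""                      # conditional request satisfied
        return 200, resource.to_html(request)   # the default happy path

    class HelloResource(Resource):
        def generate_etag(self, request):
            return '"v1"'                       # overriding one decision is enough

    print(run_decision_core(HelloResource(), {"If-None-Match": '"v1"'}))  # (304, '')

The point of the sketch is the same as in the answer above: the resource never decides how "GET" works; it only states facts, and the shared core picks the status code.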
How is the testing and debugging experience on a RESTful framework like Webmachine? It's really great, and there are two ways in which I think it's exceptional: one of them is that same flowchart. It is also a visual debugger, and if you turn debugging on in Webmachine it will actually show you the path that was taken through that flowchart for every request that you have debugging on; obviously you don't want to do this normally for production systems because it generates a lot of data. And it will not only show you the path, which is very often all you need to figure out what went wrong ("this decision over here is where I went off in a direction I didn't want to"), but it even shows the state of your request before and after each of those decisions, and since the only things that change that state are the functions that you've dropped in at those decision points, it's very easy to inspect where some parameter got set to the wrong value or something like that. A part of what makes that possible is that each of these decisions is a function, and here I don't just mean whatever way you have procedures and methods, but a function in the sense that given the same inputs it should produce the same outputs. That doesn't only enable the visual debugger, it allows some other people to build some really cool tools. Webmachine has now been cloned in three or four other languages and platforms. One of them is actually on a platform that includes a proofing system, so you can have a Webmachine system, a whole running web application, about which you can prove that some things are true. You do a little bit of work and say "If all these factors are true about my system and requests, I can actually prove that these things will be true about the response." That by itself is kind of neat, but a bigger deal is that then you can change something, you can add a new function and see if those things are still true, and that's really powerful. Even if you don't end up actually using a relatively arcane thing like that, it reflects that you get this very easy composability and you can inspect small parts of a Webmachine application in isolation, and you don't have to trace through all kinds of recursive method calls and things like that. Each piece is very isolated and so debugging tends to be pretty straightforward.
Isolation and good modularity as you are describing now are also considered the driving factors for good security. Certainly. For instance, in one of the various functions that get called, whether it is is_authorized or the one that describes whether or not to present an HTTP authorization header and things like that, you can have your security model implemented via that function, and it doesn't have to be threaded through where you define "GET" and where you define "PUT" and all these things. You can have, for instance, all of your resources call the same function for determining authorization, whether it's there or in some other part of your system, some other part of your application, because the Webmachine process is all a sequence of individually executed, non-interlinked functions that can't call each other. It's very easy to inspect these functions that get called very early in the process and say "If this one returns false, I know I couldn't possibly ever return to the user." So it makes it simpler. It doesn't write your security model for you. That's still up to your application, but it makes it very easy to inspect whichever mechanism you choose to use there and be confident that if it works correctly, then you won't be doing the wrong thing in the rest of your application.
What are your future plans for Webmachine? That's a really interesting question, because Webmachine has sort of reached the point of maturity. It's no longer under as heavy change as it was for the first year or so that we were writing it and then the next year or so after we first released it. Now it's been out for a couple of years, it's mostly mature and we're still developing it, and on those rare occasions when someone finds a new bug or a new corner of it, we still do things, and there are a couple of relatively rarely used pieces of the HTTP protocol that aren't yet in the decision core model that we'll probably add at some point, especially since there's more demand for some of them now. For instance, HTTP has had in it for quite some time, since 1.1 and slightly before, the notion of the upgrade header, which lets you switch to a different protocol, and almost nobody ever used it until people started trying to do things like WebSockets and things like that. WebSockets themselves don't work much like HTTP. It's just a way to hide the fact that what you are really doing is a plain old socket, but you started out talking HTTP. Webmachine isn't going to do all the sorts of things that will turn it into a WebSocket server or some sort of other generic network server, but it can certainly do the pieces of HTTP that make it easier to build those, like adding upgrade support and things like that.
Did you find it especially hard to work with specific features of HTTP? Some headers that had ambiguity? Yes. There is an active effort right now in the standards community to improve the HTTP standards, and I don't mean to change the content, but to recognize and fix the fact that the HTTP standard, as it exists today in HTTP 1.1, is really difficult to decipher. You have to become an expert and you have to read many documents and remember everything, because all the cross-references are mixed up with each other. Much of the challenge in building Webmachine in the first place is that there are many pieces like that. For instance, everything about conditional requests and which conditions should be checked first and satisfied first are things that are mostly not very explicit in the protocol; in many cases there is really only one right way to do it, but you have to figure it out by trial and error.
You mean one right way because...? Because you have consequences that don't work. If you choose the wrong condition to test first, you couldn't possibly satisfy some of the others. Also, when we added all these features that people mostly don't use in most web frameworks, we noticed that because many of those features are so underused, when we let people start using them, we found out that many web clients, whether browsers or otherwise, have extremely poor support. For instance, compare many browsers' caching support to their content negotiation support: caching in terms of holding onto a local piece of data so that I can show it to you again without going over the network, and content negotiation meaning something in HTTP that is fairly complex (and this is actually another answer to your question "What's hard?"). Content negotiation is tricky to get right in the first place, and it's this notion of a client being able to express preferences, being able to say "What I'd like most for the return value of this document is JSON. But if you don't have JSON, what I'd like most is HTML. If you don't have that, I'll take whatever you give me." That's sort of the thing.
Which is underutilized in almost all web frameworks. It is. And it's a very powerful mechanism when you use it well, but it turns out, for instance, that many browsers ignore some of the content negotiation information with regard to the cache. If you do content negotiation and caching at the same time on the same resources, there are some common browsers that will actually give you the wrong data, data that doesn't match the content that was negotiated. We actually had to caution some of our users early on, and make it easier to not do things wrong in that way while still giving that power. It's been a challenge in some cases to implement some pieces like content negotiation and conditional writes, but also to find the places where, even when those things are implemented correctly on our side, we don't always want to encourage using them, in cases where the implementation on the other side, the client side, might not be able to work well.
Or probably on proxies. Exactly - intermediaries. One of the big advantages of using HTTP is being able to have all these intermediaries, but some of them are better than others in different regards.
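As a rough illustration of the content negotiation discussed above, here is a hypothetical Python sketch (not code from Webmachine or any browser) of how a server might pick a representation from an Accept header like the JSON/HTML example:

    # Hypothetical sketch of Accept-header content negotiation.
    def parse_accept(header):
        prefs = []
        for part in header.split(","):
            pieces = part.strip().split(";")
            media_type = pieces[0].strip()
            q = 1.0                              # default quality value
            for param in pieces[1:]:
                key, _, value = param.strip().partition("=")
                if key == "q":
                    q = float(value)
            prefs.append((q, media_type))
        return sorted(prefs, reverse=True)       # highest preference first

    def choose(available, accept_header):
        for q, media_type in parse_accept(accept_header):
            if q <= 0:
                continue
            if media_type in available:
                return media_type
            if media_type == "*/*":
                return available[0]
        return None                              # nothing acceptable: 406

    print(choose(["text/html", "application/json"],
                 "application/json, text/html;q=0.8, */*;q=0.1"))  # application/json

The real rules in the specification are more involved (type wildcards such as text/*, parameter matching, tie-breaking), which is exactly the kind of detail a toolkit like Webmachine is meant to handle once so that individual applications don't have to.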
You have studied HTTP a lot. Are there parts of the specification you would like to see changed because it's like a 20-25 year old protocol? Not quite that old, but it's getting there. Right now there is an effort that's called HTTPbis that I'm very supportive of and I'm very glad it's being done. The idea is that the next version of the HTTP protocol won't be about changing and improving the protocol (there are some small things that will have to happen); it's really about fixing the standard's form, changing the standard without changing the protocol, so that you can divide things up into five or six pieces of the protocol specification that are cross-referenced only when they have to be, instead of being all mixed up with each other. I think the biggest thing that ought to change about the HTTP specification is the one that's being worked on right now, which isn't really any of the semantics of the specification, but the form of it, just to make it easier for other implementers to understand.
How do you see web frameworks and toolkits like Webmachine evolving in the near future? Do you see any major paradigm shift happening besides the fact that people are turning more towards RESTful frameworks? In addition to the recognition that in a lot of cases the REST architecture and the basic web architecture notions are very valuable, I think it's also the case that some frameworks in very mainstream, common languages are starting to either adopt (in the case of existing frameworks) or be built around (in the case of new ones) ideas that have historically been more on the fringe - notions of concurrency, or notions that may have originated somewhere in functional programming, or techniques outside the mainstream of programming. Over the past couple of years these have been popping up more and more in the context of new web frameworks that are not fringe frameworks, but are in fact the popular ones. I think one of the greatest things we're seeing now is not so much that there is one set of features being added in the same direction, but that there is a bit more cross-pollination, and people from different worlds of programming are actually affecting each other in a beneficial way.
2015-48/0317/en_head.json.gz/14440 | Test two: Features
Do the bells ring and the whistles whistle?
Microsoft Office comprises three core tools: Word, Excel and PowerPoint. Word is the most powerful consumer word processor around. Excel, too, boasts many unique features. Apple's Keynote was streets ahead for a while, but PowerPoint is fighting back, with the ability to edit photos and broadcast presentations online, and a first-class Presenter View. Apple's iWork apps take the pain out of creating attractive documents, particularly at their bargain price. ThinkFree Office aims to replicate Microsoft Office and focuses on accommodating the work patterns of MS Office users. It's a very cost-effective alternative. Symphony is a traditional office workhorse. It may not be perfect, but it's stable, reliable and, perhaps most important of all, free. It's just pipped by LibreOffice, though, which also throws in database, drawing and maths capabilities. Aside from word processor, spreadsheet and presentation tools, Google Drive also offers a basic vector drawing program.
Test three: Design and use
Is it easy to get things done in the suite?
The ribbon-based approach of MS Office 2011 can take a little getting used to, but there's a wide range of templates on hand. All the iWork apps come with a generous selection of templates, and it's easy to make your own or find third-party extras. The apps are powerful and a joy to use. Despite aiming to mimic the Office interface, ThinkFree lacks the flair and grace of Microsoft's or Apple's suites. Symphony's interface is well designed, with a Properties panel keeping the most useful options close at hand to help you make quick changes to your formatting without having to dig through the menus. LibreOffice lacks this, but then it doesn't cluster your documents in tabs inside a single window, so you can have multiple files open side by side. The apps in Google Drive are a showcase piece of web design: they render a full-featured and very powerful office suite in your browser. In almost every respect they feel like local apps, but you do need an internet connection whenever you want to work.
Test four: Connectivity
Sync to the cloud, collaboration and more
With Google Drive, you can invite colleagues to view or edit documents. It's easy to work on the same document on several different machines. iCloud makes it very easy to edit iWork documents on your Mac and an iOS device, but collaborative working is less well served. You can email documents from each app's Share menu, but since the demise of iWork.com it's more difficult to publish your work online or facilitate group approval. Microsoft hasn't yet produced an iOS version of Office. Document sharing revolves around SkyDrive, which relies on the bundled Document Connection app on the Mac. It's easy-to-use and fuss-free. ThinkFree Office is available for Mac, Windows and Linux, with Android and iOS versions allowing you to manage files stored in a free ThinkFree online account, although not edit them remotely. Symphony has no integrated iCloud or SkyDrive equivalent, so the best you can do is save to a shared folder on Dropbox or other third-party service. The same applies to LibreOffice.
The winner: Top office suite
If compatibility is key, then Microsoft Office wins out. You don't get everything on OS X that you get under Windows, but Office for Mac 2011 is a solid, powerful package. Our only qualm is the price. Even the Home and Student edition now tips the scales at £110.
You can cut costs with the Office365 rental model, which starts at £10 per month per user for small businesses, and £7.99 a month/£80 a year for home users. This lets you install all four Office apps on up to five Macs or PCs, and gives you 20GB of SkyDrive storage. Although this is good value, after six months you've paid more than you would if you'd bought the three iWork apps outright, and you'll still have to keep on paying to keep on working.
| 计算机 |
2015-48/0318/en_head.json.gz/352 | Platform: x86_64
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15."
1 DVD for installation on an x86_64 platform | 计算机 |
2015-48/0318/en_head.json.gz/935 | Teenage hacker sentenced to six years without Internet or computers
By Andrew Kalinchuk
Cosmo the God, a 15-year-old UG Nazi hacker, was sentenced Wednesday to six years without Internet or access to a computer.
The sentencing took place in Long Beach, California. Cosmo pleaded guilty to a number of felonies including credit card fraud, bomb threats, online impersonation, and identity theft.
Cosmo and UG Nazi, the group he runs, started out in opposition to SOPA. Together with his group, Cosmo managed to take down websites like NASDAQ, CIA.gov, and UFC.com among others. Cosmo also created custom techniques that gave him access to Amazon and PayPal accounts.
According to Wired's Mat Honan, the terms of Cosmo's probation, which lasts until he is 21, will be extremely difficult for the young hacker:
“He cannot use the internet without prior consent from his parole officer. Nor will he be allowed to use the Internet in an unsupervised manner, or for any purposes other than education-related ones. He is required to hand over all of his account logins and passwords. He must disclose in writing any devices that he has access to that have the capability to connect to a network. He is prohibited from having contact with any members or associates of UG Nazi or Anonymous, along with a specified list of other individuals.”
Jay Leiderman, a Los Angeles attorney with experience representing individuals allegedly part of Anonymous, also thinks the punishment is very extreme:
“Ostensibly they could have locked him up for three years straight and then released him on juvenile parole. But to keep someone off the Internet for six years — that one term seems unduly harsh. You’re talking about a really bright, gifted kid in terms of all things Internet. And at some point after getting on the right path he could do some really good things. I feel that monitored Internet access for six years is a bit on the hefty side. It could sideline his whole life–his career path, his art, his skills. At some level it’s like taking away Mozart’s piano.”
There's no doubt that for Cosmo, a kid who spends most of his days on the Internet, this sentence seems incredibly harsh. Since he's so gifted with hacking and computers, it would be a shame for him to lose his prowess over the next six years without a chance to redeem himself. It wouldn't be surprising if he found a way to sneak online during his probation, though that kind of action wouldn't exactly be advisable. It's clear the FBI is taking his offenses very seriously, and a violation of probation would only fan the flames.
Do you think the sentencing was harsh or appropriate punishment for Cosmo’s misdeeds? | 计算机 |
2015-48/0318/en_head.json.gz/2277 | This is where Dave writes and updates Scripting News.
Prior art as a design method
Wed, Jun 18, 2003; by Dave Winer.
Anyone who has worked with me knows how much I value prior art. Here's how it goes. We're designing a feature, getting ready to implement it. At some point in the design process we ask "Has anyone else done this?" and if so, we consider doing it that way. Sometimes there are two or three ways to do something, but usually, if something has been proven to work with users, or on current hardware, or somehow has connected with reality and worked, there's usually one best way to do whatever it is. Whoever did it first probably had to iterate, to try one approach and fail, then try another, or see a different way later and re-do it. By respecting prior art you can save all that time. But there's another even better reason to respect prior art. If we do it the same way -- instead of two ways to do something, there's only one. That means that any software that worked with the other guy's product works with ours. It means that users who know how to use the other product also know how to use ours. It's respectful in the true sense of the word. We listened to you, we thought you were right, so we did it your way. You can see in a narrative from Evan Williams that he was surprised when the UserLand design algorithm kicked out the obvious answer -- clone the Blogger API even though we already had ManilaRPC which was broader. Only one way to do something is much better than "I have a better way." There wasn't much of an installed base on ManilaRPC outside UserLand. We were aiming bigger. (And we continued to support the original interface and still do to this day. I'm using it to write this story.)
I became a believer in prior art when I made my first end-user product, a simple outliner for Unix. I followed the command structure and style of the Unix line editor (god I forget its name). Later, when I was making Apple II and IBM PC software, I followed the user interface popularized by Mitch Kapor in VisiPlot and then Lotus 1-2-3. Macintosh was the crowning achievement in design-by-prior-art. They actually had a book of rules called the User Interface Guidelines that told you how you had to do it. Were the criteria subjective? You bet. Did it work? Yes it did. Was it worth the pain? Of course. It made the machine work better for users, and developers. A user who previously could only use one or two Apple II or IBM PC software products now could use five or more Mac products because they didn't needlessly reinvent things that didn't need re-inventing. And of course the Mac itself, an improved clone of the Xerox Star, is a great demo of prior art.
PS: Another way to say prior art is "only steal from the best."
| 计算机 |
2015-48/0318/en_head.json.gz/4870 | Andrew Jones was deep within his creative space while in production for the d'artiste: Concept Art book. The Massive Black luminary discusses his history, ideas and plans for the future. His is an exciting journey, a road less travelled, but one well worth the trip. d'artiste: Concept Art presents the techniques of leading concept artists Viktor Antonov, George Hull, Andrew Jones and Nicolas "Sparth" Bouvier. In this masterclass tutorial book, these four authors bring readers through concept art techniques used to create environments, characters and machinery for film, television and video games.
Lesser Evil
My parents were painters and while in preschool, I painted a picture of a caterpillar which the teacher thought was something really special. So she encouraged my parents to get me into private lessons and that started my life as an artist. I've been painting ever since. I've done a lot of life drawings, and this is something I feel I was influenced heavily by. There was a lot of time to do life drawing because I lived out on a farm, and I drew stuff for entertainment. I really had a lot of time on my hands.
I went to a school called Ringling School of Art and Design in Sarasota, Florida where I gained a Bachelor of Fine Arts degree in Computer Animation. At Ringling, I was learning computer animation and Maya, skills that I used to translate into being a more efficient concept artist. At the Boulder Academy of Fine Arts, I received classical academic artistic training from master Elvie Davis, which included cast and figure drawing, life painting, sight-sizing, lots of color and shadow theory. I took a semester off and went to medical school to dissect cadavers. I used that time to really get in close to these cadavers and learn anatomy. You don’t really forget it once you cut open somebody’s back. It’s a great way of burning images into your mind.
Die SF Self-portrait #938
I was exposed to the digital side of the arts pretty early. I remember fooling around with Painter 1.0 when it was sold in something like a paint can. I was playing with it before Wacom came along so I was only using a mouse. I remember experimenting with it, but it wasn't as satisfying as the 'traditional way'. At the time, I was really into markers, and pencils, so it was hard for the original programs to compete with that. I remember when the first Wacom tablet came out, I was experimenting with that towards the end of 1998. At that point, I really got stuck into Painter with the Wacom. I never really looked back, because I was sold on the digital medium from then on.
During my schooling, I took a job with one of my art school peers, Jason Wen. I was working on a personal project of his called ‘F8’. He saw some of my concept work and wanted me to collaborate. This was my first job, working as a Concept Artist, working solo on the digital direction of a film. Doing the sceneries, characters, vehicles and guns—practically the whole universe of elements. ‘F8’ has become a cult classic, initially shown at SIGGRAPH when it came out. That was really a great experience because it was the first time that I was able to see my concepts actually in 3D, after seeing the updates and the models in LightWave. I remember getting really excited about the potential of drawing something and having someone else modeling it and taking it to the next level.
Painter seems to have a certain spirit to it. It’s definitely the most intuitive of programs. There is an element of chaos and unpredictability to Painter. I’m just that much more comfortable using Painter than any other program.
In fact, I use both Painter and Photoshop in conjunction for the majority of my time but when I’m painting and drawing, I always have Painter there with me.
Hunger Edge on entropy | 计算机 |
2015-48/0318/en_head.json.gz/7498 |
MySQL 4.0
MySQL AB has become the most popular open source database and the fastest growing database in the industry. This is based on its dedication to providing a less complicated solution suitable for widespread application deployment at a greatly reduced TCO.
MySQL offers several key advantages:
Reliability and Performance. MySQL AB provides early versions of all its database server software to the community to allow for several months of "battle testing" by the open source community before it deems them ready for production use.
Ease of Use and Deployment. MySQL's architecture makes it extremely fast and easy to customize. Its unique multi-storage engine architecture gives corporate customers the flexibility they need with a database management system unmatched in speed, compactness, stability, and ease of deployment.
Freedom from Platform Lock-in. By providing ready access to source code, MySQL's approach ensures freedom, thereby preventing lock-in to a single company or platform.
Cross-Platform Support. MySQL is available on more than twenty different platforms including major Linux distributions, Mac OS X, UNIX and Microsoft Windows.
Millions of Trained and Certified Developers. MySQL is the world's most popular open source database, so it's easy to find high-quality, skilled staff.
Powerful, Uncomplicated Software
MySQL has the capabilities to handle most corporate database application requirements with an architecture that is extremely fast and easy to use.
CDROM
MySQL Manual | 计算机 |
2015-48/0318/en_head.json.gz/7662 | Visual Art Director
Olivier Adam is currently working at Illumination Macguff in Paris (FR) as Art Director for the preproduction of "Despicable Me 3", directed by Pierre Coffin, Eric Guillon and Kyle Balda. He finished the production of "Minions" last December, also directed by Pierre Coffin and Kyle Balda.
Olivier Adam has traveled all over the world to lend his talents. He was Set Design Artistic Director on the movie "Arthur Christmas" for the studio Aardman in Bristol (UK) as well as for the studio Imageworks Sony in Los Angeles (US), directed by Sarah Smith and Barry Cook. He was also Supervising Art Director and Set Dressing Supervisor 3D for the movie "The Tale of Despereaux", a Universal production in London (UK) directed by Sam Fell and Robert Stevenhagen. He had worked previously as a Layout Director and Workbook Director for DisneyToon Studio in Sydney (Australia) and as a Layout Supervisor for Walt Disney Feature Animation (FR).
| 计算机 |
2015-48/0318/en_head.json.gz/9170 | ACM Books and Computing History
ACM Books is a new publishing venture launched by the Association for Computing Machinery in partnership with Morgan & Claypool Publishers. It covers the entire range of computer science topics—and embraces the history of computing as well as social and ethical impacts of computing. I was delighted to accept a position on the Editorial Board with responsibilities for recruiting in this broad area. I looked at it this way: history of computing plus social and ethical impacts of computing—what couldn't we publish under this rubric?
In the Spring 2015 CBI Newsletter we featured the first computing history book in the ACM Books series. Software entrepreneur John Cullinane assembled a unique memoir–oral history collection based on CBI oral histories with his notable colleagues, his sources of inspiration, and his own oral history. Smarter Than Their Machines: Oral Histories of Pioneers in Interactive Computing (2014) contains John’s personal viewpoint on the emergence of interactive computing, involving time-sharing, databases, and networking—including excerpts from 12 CBI oral histories. We mentioned that the unusually quick production cycle of 2.5 months allowed the volume to be out in time for Christmas last year.
Bernadette Longo’s Edmund Berkeley and the Social Responsibility of Computer Professionals appeared this fall. In her research Bernadette extensively used the Edmund Berkeley papers at CBI, as well as archival sources at Harvard University, Berkeley’s FBI file, and several collections at the Smithsonian including the Grace Murray Hopper papers. Last month we enjoyed a publication countdown. The book was first available on the ACM Digital Library and Morgan & Claypool websites and, a couple weeks later, on Amazon, Barnes & Noble, and the usual “fine bookstores.”
Berkeley will be familiar to CBI Newsletter readers for multiple reasons. He was an early advocate of computing within the insurance industry, and so figures in Joanne Yates’ Structuring the Information Age: Life Insurance and Technology in the Twentieth Century (Johns Hopkins 2005). His notable book Giant Brains, or Machines That Think published in 1949 was the very first book on computing written for a popular audience, an emphasis that Berkeley kept throughout his career in selling inexpensive computer “kits” and publishing the journal Computers and Automation (1951-73). His strongly voiced anti-military stance across these years did not always endear him to Grace Hopper and other computer professionals whose careers were in the military services. The book creates a memorable portrait of a quirky and yet unforgettable person.
The third volume in the ACM computing history series is just published. In 2013 Robin Hammerman and Andy Russell, along with their colleagues at Stevens Institute of Technology, hosted a conference to celebrate the many facets of Ada Lovelace, including her contributions to early computing (with Charles Babbage), her notable place in Victorian culture (her father was the noted poet Lord Byron), her iconic status within today’s contemporary “steampunk” movement, and her enduring inspiration for women in computing. It is quite a legacy over two centuries. Or, to be precise, exactly 200 years since the bicentennial of her birth is coming soon in early December 2015. We aim to have Ada’s Legacy contribute a bit to the burgeoning media interest in her accomplishments and career. At the least the volume covers an immense and varied terrain in appraising Ada’s legacy: we know of no other 19th century woman who, in addition to significant mathematical attainments and early computer programming, has a programming language named for her (covered in the book’s chapters 3-5), figures prominently in a contemporary science-fiction literary genre (in chapters 8-10), and inspires contemporary computing reform. The book contains Ada’s “Notes to the Menabrea Sketch,” where her contributions to mathematics and computing are set down for readers to examine themselves, as well as historical information on the Ada computer language. Sydney Padua, author of the recent graphic novel, The Thrilling Adventures of Lovelace and Babbage: The (Mostly) True Story of the First Computer (2015), contributes three original drawings including the one that graces the book’s cover.
It so happened that these three books have meaningful connections to the Charles Babbage Institute, but this is no requirement. Other volumes in preparation include a technically oriented history of software and a history of early networking. Please give me a holler if you have an idea that might become an ACM Book.
Thomas J. Misa
| 计算机 |
2015-48/0318/en_head.json.gz/9280 | Ouya: 'Over a thousand' developers want to make Ouya games
Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front.
“Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013.
While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be.
As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields.
Expect to hear more solid information on the Ouya and its games lineup as we inch closer to its nebulous 2013 release. Hopefully for the system's numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not.
2015-48/0318/en_head.json.gz/9863 | PHP Fog Announces $1.8 Million Investment, Led by Madrona Venture Group
PHP Fog, a provider of easy deployment and infinite scalability of PHP-based web apps, announced today it has received $1.8 million in financing to grow the company and support the PHP developer community.
Portland, OR (PRWEB)
PHP Fog, a PHP-based Platform-as-a-Service (PHP PaaS) cloud computing platform, today announced that it has secured $1.8 million in financing from Madrona Venture Group, First Round Capital, Founder's Co-Op, and other prominent angel investors. PHP is the most popular web development language in the world, with millions of active developers and tens of millions of PHP-based sites already in deployment. PHP Fog is the only company offering effortless deployment and infinite scaling of PHP applications in the cloud. The company offers one-click deployments for many popular PHP apps and frameworks including WordPress, Drupal, Kohana, Zend, and SugarCRM. Also, with PHP Fog's N-tier scaling, customers no longer have to worry about reliability, since every part of their web stack has built-in redundancy and failover. The company is currently in private beta but expects to launch publicly in the first half of 2011. "PaaS is a red-hot sector and recent industry developments have validated the market need for cloud application platforms that help customers build, deploy and scale web apps," said Tim Porter, a Partner at Madrona Venture Group who will join the company's board. "PHP Fog identified an important customer need and developed an innovative, differentiated service with an all-star team that uniquely addresses it. Great companies start with great people and we are excited to help them grow PHP Fog to a large company."
"PHP Fog is precisely the kind of company - and Lucas is exactly the kind of entrepreneur - that Founder's Co-op exists to support," said Chris DeVore from Founder's Co-op, a seed-stage investment fund based in Seattle. "He's a developer's developer, tackling an acute pain that he and his peers experience in scaling their own software businesses."
The company was founded in 2010 by Lucas Carlson, a PHP developer for over 8 years and one of the leading Ruby developers in the world. Prior to starting PHP Fog, Lucas was engineer #1 at Mog and helped to build and scale the site to tens of millions of monthly pageviews. Lucas also wrote the Ruby Cookbook for O’Reilly and is active in the open source community. “I couldn’t be more excited about all the strong investors the company has been able to attract,” said Lucas Carlson. “We are in a space that is changing every day, but the opportunity is massive. Now we have the resources to scale quickly and provide a world-class service to PHP developers.”
About PHP Fog
PHP Fog provides simple one-click installations of some of the most popular PHP applications out there. It just works. You get full access to the source code of your PHP application through git. Push your code changes to us and we will publish these changes to the cloud. We handle deployment, failover, database maintenance, scaling, and all the other plumbing that can take an army of programmers and systems administrators to handle. Automatically. You pay only for what you use.
About Madrona Venture Group
Madrona Venture Group (http://www.madrona.com) has been investing in early-stage technology companies in the Pacific Northwest since 1995 and has been privileged to play a role in some of the region's most successful technology ventures. The firm invests across the information technology spectrum, including consumer Internet, commercial software and services, digital media and advertising, networking and infrastructure, and wireless. Madrona currently manages nearly $700 million and was an early investor in companies such as Amazon.com, Isilon Systems, World Wide Packets, iConclude, Farecast.com and ShareBuilder.
Lucas Carlson
PHP Fog (888) 974-7364 | 计算机 |
2015-48/0318/en_head.json.gz/11223 | BIS Privacy Policy Statement
The kinds of information BIS collects
Automatic Collections - BIS Web servers automatically collect the following information:
The IP address of the computer from which you visit our sites and, if available, the domain name assigned to that IP address;
The type of browser and operating system used to visit our Web sites;
The date and time of your visit;
The Internet address of the Web site from which you linked to our sites; and
The pages you visit.
In addition, when you use our search tool our affiliate, USA.gov, automatically collects information on the search terms you enter. No personally identifiable information is collected by USA.gov.
This information is collected to enable BIS to provide better service to our users. The information is used only for aggregate traffic data and not used to track individual users. For example, browser identification can help us improve the functionality and format of our Web site.
Submitted Information: BIS collects information you provide through e-mail and Web forms. We do not collect personally identifiable information (e.g., name, address, phone number, e-mail address) unless you provide it to us. In all cases, the information collected is used to respond to user inquiries or to provide services requested by our users. Any information you provide to us through one of our Web forms is removed from our Web servers within seconds thereby increasing the protection for this information.
Privacy Act System of Records: Some of the information submitted to BIS may be maintained and retrieved based upon personal identifiers (name, e-mail addresses, etc.). In instances where a Privacy Act System of Records exists, information regarding your rights under the Privacy Act is provided on the page where this information is collected.
Consent to Information Collection and Sharing: All the information users submit to BIS is done on a voluntary basis. When a user clicks the "Submit" button on any of the Web forms found on BIS's sites, they are indicating they are aware of the BIS Privacy Policy provisions and voluntarily consent to the conditions outlined therein.
How long the information is retained: We destroy the information we collect when the purpose for which it was provided has been fulfilled unless we are required to keep it longer by statute, policy, or both. For example, under BIS's records retention schedule, any information submitted to obtain an export license must be retained for seven years.
How the information is used: The information BIS collects is used for a variety of purposes (e.g., for export license applications, to respond to requests for information about our regulations and policies, and to fill orders for BIS forms). We make every effort to disclose clearly how information is used at the point where it is collected and allow our Web site user to determine whether they wish to provide the information.
Sharing with other Federal agencies: BIS may share information received from its Web sites with other Federal agencies as needed to effectively implement and enforce its export control and other authorities. For example, BIS shares export license application information with the Departments of State, Defense, and Energy as part of the interagency license review process.
In addition, if a breach of our IT security protections were to occur, the information collected by our servers and staff could be shared with appropriate law enforcement and homeland security officials.
The conditions under which the information may be made available to the public: Information we receive through our Web sites is disclosed to the public only pursuant to the laws and policies governing the dissemination of information. For example, BIS policy is to share information which is of general interest, such as frequently asked questions about our regulations, but only after removing personal or proprietary data. However, information submitted to BIS becomes an agency record and therefore might be subject to a Freedom of Information Act request.
How e-mail is handled: We use information you send us by e-mail only for the purpose for which it is submitted (e.g., to answer a question, to send information, or to process an export license application). In addition, if you do supply us with personally identifying information, it is only used to respond to your request (e.g., addressing a package to send you export control forms or booklets) or to provide a service you are requesting (e.g., e-mail notifications). Information we receive by e-mail is disclosed to the public only pursuant to the laws and policies governing the dissemination of information. However, information submitted to BIS becomes an agency record and therefore might be subject to a Freedom of Information Act request.
The use of "cookies": BIS does not use "persistent cookies" or tracking technology to track personally identifiable information about visitors to its Web sites.
Information Protection: Our sites have security measures in place to protect against the loss, misuse, or alteration of the information on our Web sites. We also provide Secure Socket Layer protection for user-submitted information to our Web servers via Web forms. In addition, staff are on-site and continually monitor our Web sites for possible security threats.
Links to Other Web Sites: Some of our Web pages contain links to Web sites outside of the Bureau of Industry and Security, including those of other federal agencies, state and local governments, and private organizations. Please be aware that when you follow a link to another site, you are then subject to the privacy policies of the new site.
Further Information: If you have specific questions about BIS' Web information collection and retention practices, please use the form provided.
Policy Updated: April 6th, 2015 10:00am | 计算机 |
2015-48/0319/en_head.json.gz/279 | Rss Feeds Welcome, Guest Sign In
Breach Report: Who Got Hacked in 2010?
By Christina Volpe, Associate Editor
January 25, 2011. It’s the conversation that no restaurant or hotel owner wants to have with one of their customers: “my credit card has some mysterious charges on it, and I believe that they stem from your business.” That’s exactly what happened to Blanca Aldaco, owner of Aldaco’s Mexican Cuisine at Stone Oak in San Antonio, Texas.
“I remember everything just like it happened today,” says Aldaco, as she recalls the day when a customer came into the restaurant to inform her of some unauthorized charges to his card. “I listened to him speak and then I asked him, ‘what makes you think it came from here?’ And he said, ‘well, this is the only place that I used this card.’” By noon on the following Saturday, the restaurant had received three similar calls from customers. “By Sunday, we had probably 70 calls,” says Aldaco.
When the Secret Service and the police department showed up at the restaurant that Wednesday morning, Aldaco was already hearing of charges from as far away as Turkey and Ireland. “It was global; it wasn’t just in the United States,” she says. In the United States, the majority of the stolen card numbers were being used at Walmart and Target. When all was said and done, roughly 5,100 credit cards were compromised (although not all of them had fraudulent charges), as a result of an overseas hacker who infiltrated the restaurant’s network with sophisticated malware between March 21 and May 17, 2010. “Basically what they did was install a malware memory dumper, so every time we swiped, it was going into an imaginary pocket and it would stay there until they extracted it,” says Aldaco.
Guest response
But how would such an incident affect restaurant patrons? A September 2010 telephone survey of more than 1,000 U.S. adults by Harris Interactive on behalf of Cintas found that 76 percent would not return to a restaurant that they ate at if their personal information was stolen. Yet despite this data breach, Aldaco’s says that it has experienced the exact opposite due to its openness with its customers. To inform customers, the restaurant released a statement on its website, posted updates on its Facebook page, and maintained an active conversation with the media. Even though the restaurant had been breached, customers continued to return. “‘We know [about the breach], we’re still here to support you.’ That’s what I kept hearing,” says Aldaco in reference to her conversations with guests over the following weeks. “You can’t cover it up. Speak with clients and be honest. Let people know what you are going through, you were victimized as well.”
Aldaco also related the frustration that she felt about not knowing enough about PCI. “There is no education, nobody tells you about this until it explodes in your face,” she says. “Make sure that you don’t have any stored data, call your POS seller and make sure that you are up-to-date. And if you are lucky enough to have an IT guy, get going.”
You are not alone
Although Aldaco’s brush with a data breach was frustrating for the restaurant’s management staff and its patrons alike, their story is not anything new to hospitality. The hospitality industry has long been a victim of data breaches for a number of reasons. Here are seven other hospitality organizations that suffered the same fate as Aldaco’s last year:
Wyndham Hotels & Resorts: In February 2010, Wyndham Hotels & Resorts issued an open letter to their guests informing them that certain Wyndham brand-franchised and managed hotel computer systems had been compromised by a hacker, resulting in the unauthorized acquisition of customer names and credit card information. The hacker was able to infiltrate central network connections to move information to an off-site URL before the hotel company discovered the intrusion in late January 2010. The breach was believed to have occurred between late October 2009 and January 2010.
Julie’s Place: This Tallahassee eatery was identified by the Leon County Sheriff’s Office Financial Crimes Unit as the source of card compromises for more than 100 consumer accounts over the summer of 2010. It is estimated that the incident resulted in $200,000 in fraud losses. According to BankInformationSecurity.com, the hackers targeted the restaurant’s point of sale system, somewhere between the network and the restaurant’s processor.
Destination Hotels & Resorts: Back in June, Destination Hotels & Resorts reported that the credit cards of guests who stayed at 21 of the company’s hotels may have been compromised. In a press release, the company said that it uncovered a malicious software program that was inserted into its credit card system from a remote source, affecting only credit cards that were physically swiped.
HEI Hospitality: In September 2010, HEI Hospitality, owner and operator of a number of Marriott-branded and Starwood Hotels & Resorts properties, informed the New Hampshire Attorney General’s Office and its customers of a compromise to its IT systems, occurring from March 25-April 17. HEI sent letters to some 3,400 customers, informing them that their credit cards may have been compromised. According to DataBreaches.net, the firm informed customers that it believed that the point of sale system used in a number of its hotels’ restaurants, bars, and gift shops, as well as the information management system used at check-in, were illegally accessed and transactions were intercepted.
Taco Bell: In late September, The Grand Rapids Press reported on a credit card skimming scheme that involved Taco Bell employees and two other individuals, Rodger Torres and Onil Rivas-Perez. Police say that the men used the card numbers to purchase pre-paid Visa gift cards from three Meijer stores.
Broadway Grill: More than 1,000 credit and debit cards may have been compromised in an attack that occurred in late October on the Seattle Capitol Hill area restaurant, Broadway Grill. Officials say that the credit card data was stolen on October 22, and that the forensic trail leads overseas. The hacker was able to access the restaurant’s point of sale system.
McDonald’s: In early December, McDonald’s said that some of its customers may have been exposed during a data security breach when a hacker gained access to a third-party-managed database containing customer information, including e-mail, phone numbers, addresses, birthdays and more. According to the company’s website, customers’ credit card information and Social Security numbers were not compromised.
All materials on this site copyright © Edgell Communications. All rights reserved. | 计算机 |
2015-48/0319/en_head.json.gz/505 | The Book — Extras
kernelthread.com
Understanding Apple's Binary Protection in Mac OS X
© Amit Singh. All Rights Reserved.
Written in October 2006
With the advent of Intel-based Macintosh computers, Apple was faced with a new requirement: to make it non-trivial to run Mac OS X on non-Apple hardware. The "solution" to this "problem" is multifaceted. One important aspect of the solution involves the use of encrypted executables for a few key applications like the Finder and the Dock. Apple calls such executables apple-protected binaries. In this document, we will see how apple-protected binaries work in Mac OS X.
Relax. Please don't send me a note telling me about your "friend" who has been "easily" running Mac OS X on the laundry machine. When I say "non-trivial," we're not talking about mathematical impossibility, etc.
Note that besides hindering software piracy, there are other scenarios in which encrypted binaries could be desirable. For example, one could turn the requirement around and say that a given system must not run any binaries unless they are from a certain source (or set of sources). This could be used to create an admission-control mechanism for executables, which in turn could be used in defending against malware. In a draconian managed environment, it might be desired to limit program execution on managed systems to a predefined set of programs—nothing else will execute. In general, a set of one or more binaries could be arbitrarily mapped (in terms of runnability) to a set of one or more machines, possibly taking users, groups, and other attributes into account. I must point out that to create such a mechanism, one doesn't have to use encrypted binaries. Furth | 计算机 |
2015-48/0319/en_head.json.gz/736 | Accounting Software Lotus Symphony Office Suite available for free from IBM
May 14th 2008: Internet users may now access IBM's Lotus Symphony office suite for free from IBM's Web site. The program, which permits the user to create documents, spreadsheets and presentations from software stored on IBM's servers, is available for both Microsoft Windows and Linux operating systems, with support for the Apple Mac OS platform planned for the future.
Alternatively, Microsoft's Office Suite for both XP and Vista is now listed on Amazon starting at $325 for the standard version.
Rob Tidrow, a computer programmer who has written several guides to using Microsoft Office, says that "Symphony does not lack many features that even power users of Office need," according to Reuters. Tidrow has installed Symphony on the computers of his two children, and says it can meet the needs of churches, schools, and small businesses. Tidrow just finished writing IBM Lotus Symphony for Dummies. Another satisfied user is Pierre Avignon, a graphics designer from West Newbury, Massachusetts. RedOrbit reports that Avignon uses Symphony for the kind of work he used to perform on Microsoft's Word, Excel, and PowerPoint. But he says that when he tells his friends about Symphony, "As soon as you say it's free, (people) feel less comfortable. They say 'What's the catch?'" In some cases, the catch may be a time cost. For small businesses already using Microsoft Office the migration to Symphony could be complicated, canadianbusiness.com says. One way around this problem is to save documents in Adobe Systems' Portable Document Format (PDF), and e-mail them as read-only files, a solution that IBM also suggests. IBM does not offer technical support for Symphony.
Untangling accounting software that is hooked in with Excel, or collaboration platforms or content management tools that link with Word can also be difficult. "Everything is dedicated to integrate well with Microsoft Office," says Fen Yik, an analyst with Info-Tech Research Group in London, Ontario, according to canadianbusiness.com, "and that is not necessarily the case with other productivity suites."Trending
http://www.redorbit.com/news/technology/1377174/ibm_offers_free_alternative_to_microsoft_office/
But Symphony and other free programs like OpenOffice, which includes a database program and drawing software, and Google Docs are becoming attractive alternatives for businesses that do not have large technology budgets as well as for personal use. "Ninety percent of the users don't need all the functionality that Office provides," said Rebecca Wettemann, an analyst with Nucleus Research, according to Reuters. "Ninety percent of people basically just use Excel to make lists."
| 计算机 |
2015-48/0319/en_head.json.gz/1145 | Biographical Sketch: Eliot Christian is helping develop and
promote a global vision for information access that enhances the free flow of information
through decentralized information locator services. He helped establish this approach in
law, policy, standards, and technology at the United States Federal level, building
consensus among government agencies and developing key support among libraries and
information service organizations and corporations. He has been carrying these ideas to
other levels of government and internationally, leading the ISO Metadata Working Group and
consulting on a variety of initiatives supporting a Global Information Infrastructure.
Most recently, he has helped design the architecture for the international Global Earth Observations System of Systems. Since 1990, Eliot has pursued issues of data and information management primarily from
the perspective of environment and earth science at the interagency and international
levels. He joined the United States Geological Survey in 1986, as a manager of data and
information systems with a focus on strategic planning, standards, and new technologies.
From 1975 to 1986, he managed computer resources in the Veterans Administration, helping direct six nationwide data processing centers and overseeing data management for all VA corporate databases. | 计算机 |
2015-48/0319/en_head.json.gz/1508 | New Mortal Kombat X Gameplay Video Walks Us Through Raiden's Character Variations
ClayMeow - August 5, 2014 09:07AM in Gaming
Last month, Warner Bros. Interactive Entertainment (WBIE) and NetherRealm Studios surprised nobody by revealing that franchise mainstay Raiden would be in Mortal Kombat X. Today, WBIE released a new video narrated by NetherRealm Studio's creative director Ed Boon, describing Raiden's character variations and play styles, along with some new images:
Thunder God — "The Thunder God Variation enhances Raiden's lightning attacks. This allows him to extend and perform combos that are unique only to this Variation. Thus giving Raiden the potential to do more damage."
Displacer — "In his Displacer Variation, Raiden gains the ability to teleport to multiple attack zones. This tactic can be used to move into close range, attack from behind, or even escape rushdowns, making Raiden difficult to contain and significantly more mobile."
Storm Lord — "The Storm Lord Variation gives Raiden the ability to create lightning traps. These traps can be used defensively or as a method to corner opponents. They can be used to affect wide areas, allowing Raiden to control the entire battlefield."
Mortal Kombat X will be coming to PC, PlayStation 4, Xbox One, PlayStation 3, and Xbox 360 in 2015. | 计算机 |
2015-48/0319/en_head.json.gz/1987 | A shooter is a kind of video game. The aim of the game is to beat enemies by shooting (or otherwise killing) them. The enemies shoot back. The aim of the game is to stay alive as long as possible.
Many of the oldest computer games were shooters; the first video game ever made was a shooter called Computer Space. And one of the first games that many people played was a shooter called Space Invaders.
There are lots of different kinds of shooter. Now many people like first-person shooters. But there are other kinds too. In Japan many people play shooters where the enemies fire lots of bullets. The bullets make beautiful patterns on the screen. This kind of shooter is called a barrage shooter or a curtain fire shooter. It is also called by the Japanese name, danmaku.
This short article about video games can be made longer. You can help Wikipedia by adding to it. | 计算机 |
2015-48/0319/en_head.json.gz/2137 | Service Pack 2 for Vista and Server 2008 finally arrives
Microsoft has finally released the final build of Service Pack 2 for Windows …
After a lengthy development cycle that included delays and furious testing, Microsoft has finally given the public Service Pack 2 for Windows Vista and Windows Server 2008 (final build is 6.0.6002.18005). You can download the installer from the Microsoft Download Center: 32-bit (348.3MB), 64-bit (577.4MB), and IA64 (450.4MB). There's also an ISO image (1376.8MB) that contains these installers. The installers will work on English, French, German, Japanese, and Spanish versions of either Vista or Server 2008. Other language versions will arrive later. Those interested in slipstreamed versions of Vista and Server 2008 with SP2 will need to get an MSDN or TechNet subscription. If you have any beta versions of SP2 installed, they must be uninstalled prior to installing the final version. To do this, use the Control Panel applet called Programs and Features, select View installed updates, and then under Windows look for KB948465. SP2's main requirement (assuming no incompatible drivers are detected) is that SP1 is already installed. During the beta phase, it was speculated that this is because SP2 is not yet finalized, but the truth is that SP1 is a prerequisite even now. The reason for this is size: Microsoft wants the size of SP2 to be smaller (if SP2 was cumulative, it would make for a huge download). Server 2008 shipped with SP1 already installed (meaning SP2 is actually the first service pack for Server 2008), including the contents of the SP1 client code. SP2 applies to both the client and server versions of Windows because Microsoft adopted a single serviceability model to minimize deployment. Also, by releasing one single service pack, Microsoft has less testing to do, since Vista and Server 2008 have the same binaries for all common files, making for a quicker release (SP1 was released 14 months ago). There are a few significant additions that are included in SP2: Windows Search 4.0, Bluetooth 2.1 Feature Pack, the ability to record data on to Blu-Ray media natively in Vista, Windows Connect Now (WCN) is now in the Wi-Fi Configuration, and exFAT file system supports UTC timestamps. The service pack contains 836 hotfixes. For those interested in a more complete changelog, I've included one below: Hardware ecosystem support and enhancements
SP2 adds support for the 64-bit central processing unit (CPU) from VIA Technologies, which adds the ID and vendor strings for the new VIA 64-bit CPU. SP2 integrates the Windows Vista Feature Pack for Wireless, which contains support for Bluetooth v2.1 and Windows Connect Now (WCN) Wi-Fi Configuration. Bluetooth v2.1 is the most recent specification for Bluetooth wireless technology. SP2 improves performance for Wi-Fi connections after resuming from sleep mode. SP2 includes updates to the RSS feeds sidebar for improved performance and responsiveness. SP2 includes ability to record data to Blu-Ray Disc media. Operating system experience updates
SP2 includes Windows Search 4.0, which builds on Microsoft’s search technology with improved indexing and search relevance. It also helps find and preview documents, e-mail (including signed e-mail messages), music files, photos, and other items on the computer. The search engine in Windows Search 4.0 is a Microsoft Windows� service that is also used by programs such as Microsoft Office Outlook� 2007 and Microsoft Office OneNote� 2007. Autotuning Diagnostics in SP2 now interprets current network conditions when implementing Windows scaling. This feature includes full netsh support. SP2 improves Windows Media Center (WMC) in the area of content protection for TV. SP2 removes the limit of 10 half open outbound TCP connections. By default, SP2 has no limit on the number of half open outbound TCP connections. Enterprise improvements
SP2 provides the Hyper-V virtualization environment as a fully integrated feature of Windows Server 2008, including one free instance with Windows Server 2008 Standard, four free instances with Windows Server 2008 Enterprise and an unlimited number of free instances with Windows Server 2008 Datacenter. SP2 increases the authentication options for WebDAV redirector, enabling Microsoft Office users greater flexibility when authenticating custom applications using the WebDAV redirector. SP2 provides an improved power management (both on the server and the desktop), which includes the ability to manage these settings via Group Policy. SP2 improves backwards compatibility for Terminal Server license keys. Windows Server 2008 changed the licensing key from 512 bytes to 2,048 bytes which caused clients using older Terminal Server versions to fail. SP2 allows legacy license keys on Citrix applications to work with Windows Server 2008 Terminal Server. Setup and deployment improvements
Provides a single installer for both Windows Vista and Windows Server 2008. Includes the ability to detect an incompatible driver and either block service pack installation or warn users of any potential loss of functionality. Provides better error handling and descriptive error messages where possible. Improves manageability through logging in the system event log. Provides a secure install experience. Includes the ability to service the installer post release. More details on SP2 are available on TechNet. According to this document, SP2 is scheduled to begin arriving via Automatic Updates on June 30, 2009. If you don't want to download it from the Microsoft Download Center, try checking manually for updates on Windows Update. Expand full story | 计算机 |
2015-48/0319/en_head.json.gz/2339 | The Chronicles of Spellborn: Spellborn GDC Demo
Stradden, Managing Editor, Halifax, NS. Posts: 6,696. Member. March 2008 in News & Features Discussion. News Manager Keith Cross recently had the opportunity to get his first look at the upcoming MMORPG, The Chronicles of Spellborn and shares his thoughts on the new game's combat, equipment, death penalty and more.
At this year’s GDC, I finally had the chance to take a look at The Chronicles of Spellborn. Ever since last year, when Managing Editor Jon Wood went to the Netherlands to tour the Spellborn studios, he’s been telling me I need to see this game in action. One reason Jon encouraged me to see this game is because it’s one of those games that doesn’t get a lot of attention compared to other MMOs of its calibre. Spellborn is nowhere near as hyped and publicized as other triple A games that are currently in development. The other reason Jon has been bugging me to see this game is because it’s full of innovation. There isn’t really one big innovative game mechanic, or revolutionary idea in Spellborn, but more of a collection of small improvements and tweaks to the standard quest based MMORPG model that look like they add up to something special.
Read it all here. Cheers, Jon Wood, Managing Editor, MMORPG.com
Comments: BadSpock, Somewhere, MI. Posts: 7,974. Member. March 2008. I've always been very interested in this game, any news on a U.S. release date and/or publisher? Last I heard they didn't have immediate plans to release in the U.S. at the same time as Europe...
Any updates on that?
But from everything I have read on their website, including their absolutely fantastic behind-the-scenes developer journals, these guys really seem to know what is going on. They understand all aspects of the MMO and communicate this and their desires for innovation and creativity, instead of merely following convention.... And more importantly, not innovating for the sake of innovation, but only changing what needs to be changed...
Other games are changing basic MMO mechanics and calling it "innovation" but really it's just a marketing tool to say "Hey we are doing this different! Buy our game!"
I do hope this game is the sleeper hit many of us want it to be, and I hope it's released in the U.S. too so I can hop in and start playing!
0 daemon TimisoaraPosts: 680Member Common March 2008 | 计算机 |
2015-48/0319/en_head.json.gz/2453 | Fedora Core 15 x86_64 DVD
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 DVD for installation on an x86_64 platform | 计算机 |
2015-48/0319/en_head.json.gz/3118 | B‑VI, 6.5 Incorrect compound records in online databases - Guidelines for Examination
Part B – Guidelines for Search
Chapter VI – The state of the art at the search stage
6. Contents of prior‑art disclosures
6.5 Incorrect compound records in online databases
If an examiner retrieves a compound when interrogating a database created by abstracting source documents (e.g. patents, journal articles or books) and deriving the chemical compounds disclosed in those documents and, on reading the source document, is unable to locate the compound, this does not automatically mean that an error has been made and that the compound is not disclosed in the document. For example, disclosed compounds which are named but whose structures are not drawn are still part of the disclosure and will be abstracted. In addition, database providers use standard nomenclature in their database records, whereas authors of technical literature frequently do not. Consequently, the nomenclature used for the compound in the database record may not be the same as that used in the source document.
However, in certain cases the examiner is really unable to locate the compound in the source document, and this compound is relevant to the assessment of patentability. In such cases, the examiner may write to the database provider asking why the compound in question was abstracted from that document and where it is disclosed in it. If the reply from the database provider is not available when the search report is drafted, the document should be cited in the search report and used in the search opinion on the assumption that the compound is disclosed in the document. However, the examiner should also continue the search as though the compound did not exist. | 计算机 |
2015-48/0319/en_head.json.gz/4085 | Video game developer
A video game developer is a person who makes games on computers and other game systems. Some developers make games for only one or two types of game systems, and others might make only one kind of game. Some games are only for one system. Developers might try to make a version of such a game for another, different system. Some translate games from one language to another.
Video game developers work in development companies. There are over 1,000 development companies today. A big part of that thousand is very small companies that usually only have one or two workers - these kinds of small companies make games for the Internet or mobile phones. Some development companies are big, too. They have buildings in many places and hundreds of workers.
1 Types of developers
1.1 Third-party developer
1.2 In-house developers
1.3 Independent developers
Types of developers
There are three types of video game developers.
Third-party developer
Third-party developers are video game developers that make deals with big publishers to make one game at a time. The developers are not part of the company: when the game is complete, the developers do not have to make another game for the publisher if they do not want to. Publishers will tell them exactly what they want third-party developers to do, and the developers do not have very much power to do something else.
In-house developers | 计算机 |
2015-48/0319/en_head.json.gz/4711 | C:\BELLBOOK\P001-100\HTMFILES\CSP0267.HTM
An Outline of the ICL 2900 Series System Architecture
J. L. Keedy1
Summary The system architecture of the ICL 2900 Series is outlined informally. Its central feature, the virtual machine concept, is described and related to virtual storage, segmentation and paging. The procedural approach is then discussed and its implementation by a stack mechanism is described. Further sections outline the protection mechanisms, and the instruction set and related features. Finally the virtual machine approach is related to global system activities.
The paper has been written such that it may he of interest to readers without a specialist knowledge of computer architecture.
Shortly after its announcement in October, 1974, the ICL 2900 Series2 was described in the popular computing press [Dorn, 1974] as little more than a copy of the B6700/7700 systems. It is easy to see how this happened, when one discovers that it is a stack oriented machine with a segmented virtual memory which makes extensive use of descriptors. In reality the implementation of these techniques is very different in the two computer families, and although a more serious attempt has been made to evaluate these differences [Doran, 1975] this is to some extent unsatisfactory since the author has, I believe, fallen into the same trap, albeit more subtly, of viewing the ICL 2900 through the eyes of someone thoroughly steeped in B6700 ideas. In fact, although the ICL 2900 has features in common with the B6700, radical differences exist, and some of the ICL 2900 features have more affinity to other systems, such as MULTICS [Organick, 1972]. Before the similarities and differences between such systems and the ICL 2900 Series can he fully appreciated, it is highly desirable that the ICL 2900 system architecture should first be understood in its own right. The real novelty of the architecture lies in the way in which its designers returned to first principles, and in the simplicity and elegance of the result. In this paper I shall therefore describe its architecture in a manner which attempts to reflect the thoughts of its designers, aiming at a level of description similar to Organick's description of the B6700 [Organick, 1973]. No attempt will be made to compare and contrast it with other systems, and it is hoped that the paper will provide an intelligible overview to readers without specialist knowledge of computer architecture.
1. The Virtual Machine
Faced with a problem to be solved using the computer, the user formulates a solution in a high level computer language such as COBOL or FORTRAN, and having satisfied himself of its correctness he will regard the resultant program as "complete." This is in one sense correct. His encoded algorithm will, if he has done his job well, be logically complete. However, even after it has been compiled, the user's program (or in more complex cases, his sequence of programs which comprise a job) must co-operate with other programmed subsystems (operating system, data management software, library routines, etc.) to solve the user's problem. The efficiency with which the problem is solved depends to a considerable extent on how the whole aggregate of necessary subsystems co-operates, and not merely on any one subsystem. It follows that it will be advantageous for a computer architecture to provide facilities for the efficient construction and execution of such aggregates, The 2900 Series explicitly recognises these aggregates, calling the environment in which each one operates a "virtual machine. "3 An aggregate itself is called a process image," its execution by a processor is a "process," and its state of execution as characterised by processor registers is its process state.
In the following sections we shall develop the idea of the virtual machine by considering its mainstore requirements, the dynamic relationship between its components, its protection requirements and its instruction set. But before we embark on this a few further remarks are necessary.
The fundamental concept, that each job runs in its own virtual machine containing all the code and data required to solve the application problem, allows the programmer to suppose that he is the sole user of the computer. But economic reality dictates that the real machine must be capable of solving several problems simultaneously, and this necessity for multiprogramming raises a set of problems which could threaten to destroy the advantages of the virtual machine approach. For example, how are the independent virtual machines co-ordinated, synchronised and scheduled? How, in view of high main storage costs, can separate process-images be permitted to have a private copy of common subsystems (e.g., the operating system)? How can virtual machines communicate with each other? Such questions will he borne in mind as we develop the concept of the virtual machine, and subsequently we shall consider them more directly, in an attempt
1Australian Computer Journal, vol. 9, no. 2, July 1977, pp. 53-62.
2References to the ICL 2900 Series in this paper are to the larger members of the new ICL range, which should not be confused with the ICL 2903 or the ICL 2904 computers.
3The term "virtual machine" has a wide variety of meanings in computer jargon. In this paper it is used consistently in the special ICL sense described here. | 计算机 |
2015-48/0319/en_head.json.gz/5141 | Countdown to June 6 IPv6 Transition Begins at Google
12 comment(s) - last by henry1971.. on Jan 29 at 9:35 PM
IPv6 will help avert a shortage of network gateway addresses
If the internet survives the SOPA menace, it will be on its way to its biggest transition in decades on June 6, 2012. That's World IPv6 Launch Day, a day where many of the internet’s largest sites will switch over to the new 128-bit address scheme.
Currently IP addresses use 32 bits, which works out to about 4.3 billion unique addresses. To combat the growing address shortage, network firms developed network address translation (NAT) techniques, which simply reassigned addresses on a local network to internal IPs and routed packets accordingly. The new limit thus became 4.3 billion unique gateways to the main internet, as local networks would not have enough machines to run out of IP addresses.
Still, the problem remained, so in the late 1990s the Internet Engineering Task Force (IETF) set to work on a replacement for IPv4 and its 32-bit IP addresses. In 1998 they published RFC 2460, the 128-bit standard that would come to be known as IPv6.
By 2008, though, few had adopted the solution, with less than 1 percent of internet traffic going through IPv6.
But necessity is the mother of invention and Google Inc. (GOOG) is championing the IPv6 cause. IPv6 engineer Erik Kline writes:
Just a year ago, we announced our participation in World IPv6 Day. Since then, the IPv4 address global free pool was officially depleted, each of the five regions around the world receiving one last address block. Soon after, the Asia-Pacific region exhausted its free IPv4 address pool. Hundreds of websites around the world turned on IPv6 for a 24-hour test flight last June. This time, IPv6 will stay on.
For Google, World IPv6 Launch means that virtually all our services, including Search, Gmail, YouTube and many more, will be available to the world over IPv6 permanently. Previously, only participants in the Google over IPv6 program (several hundred thousand users, including almost all Google employees [PDF]) have been using it every day. Now we’re including everyone.
Google is urging its users to go to ipv6test.google.com to test compatibility, as a handful of ISPs are believed to not be ready for IPv6.
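For readers who would rather test from code than from a browser, one illustrative (and unofficial) check is to ask the system resolver for IPv6 (AAAA) records with the standard POSIX getaddrinfo call. The hostname below is only an example, and name resolution alone does not prove that IPv6 packets will actually flow end to end the way ipv6test.google.com does.

    #include <cstdio>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    // Minimal sketch: list a host's IPv6 (AAAA) addresses with getaddrinfo.
    // "ipv6.google.com" is just an example hostname; resolution alone does not
    // guarantee that your ISP actually routes IPv6 traffic.
    int main() {
        addrinfo hints{}, *res = nullptr;
        hints.ai_family   = AF_INET6;        // ask for IPv6 results only
        hints.ai_socktype = SOCK_STREAM;

        int rc = getaddrinfo("ipv6.google.com", "80", &hints, &res);
        if (rc != 0) {
            std::fprintf(stderr, "no IPv6 address found: %s\n", gai_strerror(rc));
            return 1;
        }
        for (addrinfo* p = res; p != nullptr; p = p->ai_next) {
            char text[INET6_ADDRSTRLEN];
            const auto* sa = reinterpret_cast<const sockaddr_in6*>(p->ai_addr);
            inet_ntop(AF_INET6, &sa->sin6_addr, text, sizeof text);
            std::printf("IPv6 address: %s\n", text);
        }
        freeaddrinfo(res);
        return 0;
    }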
While the UFC appears ready to go full-blast with IPv6, adoption plans in the developing world and other regions are much less developed. It may take several years before the transition is fully complete.
One misconception about IPv6 is that it will make you easier to identify or track. While it will show the world your full address (local+gateway), companies could still opt to use NAT to maintain private single-address networks. And you could still use NAT to obfuscate or otherwise mask your true IPv6 address. And of course all of the fears overlook the simple fact that whether it's a local or a global IP address, an IP address cannot identify a person explicitly, as there can be multiple users and/or multiple machines.
One true concern is that IPv6 brings some unique firewalling and security risks that users should be aware of, due to its different implementation.
RE: Misconception
quote: It's extremely unlikely that every keystroke is being recorded everywhere. While it is unlikely that EVERY SINGLE keystroke is recorded, what is known is that once that keystroke leaves your computer, such as happens when I post this comment, then you should regard it as "potentially recorded". As we have seen with "deleted" emails, which have ended up being used as evidence in court, "potentially recorded" can mean "permanently" (at least while the internet exists).
U.S. Legal System Finally Figures Out IP Address != Specific Person
Weak IPv6 Security Leaves Computers Wide Open | 计算机 |
2015-48/0319/en_head.json.gz/6917 | Official recognition of John Vincent Atanasoff's achievement came slowly - several decades after he and Clifford Berry built the first electronic digital computer. However, before his death in 1995, he received significant honors and awards for his invention.
In a formal opinion distributed on October 19, 1973, U.S. District Judge Earl R. Larson ruled that Atanasoff and Berry had constructed the first electronic digital computer at Iowa State University in the 1939 - 1942 period. This recognition came at the end of a lengthy federal trial in which the patent for the electronic digital computer, held by John Mauchly and J. Presper Eckert, was overturned.
In recognition of his achievement, Atanasoff received numerous honors and awards, including: the Order of Cyril and Methodius, First Class, Bulgarian Academy of Sciences (Bulgaria's highest honor accorded a scientist); Iowa Inventors Hall of Fame; Plaque, Iowa State University Physics Building; Honorary Membership, Society for Computer Medicine; Doctor of Science, Moravian College; Distinguished Achievement Citation, Iowa State University Alumni Association; Doctor of Science, Western Maryland College; and National Medal of Technology presented by President George Bush in a Ceremony at the White House on November 13, 1990.
The basic principles of digital computing, conceived by Dr. John Vincent Atanasoff and first realized in his Atanasoff-Berry Computer (ABC) opened the door for the emergence of the Informational Age. - Professor Arthur Oldehoeft, chair of the Computer Science Department at Iowa State University
The views and opinions expressed in this page are strictly those of the page author. The contents have not been reviewed or approved by Augustana College.
Return to the Augustana College Homepage | 计算机 |
2015-48/0319/en_head.json.gz/7270 | Search D-Lib:
Volume 16, Number 3/4
An Introduction to the March/April Issue
Laurence Lannom Corporation for National Research Initiatives
<[email protected]>
doi:10.1045/march2010-editorial
Welcome to the March/April issue of D-Lib Magazine. As I reviewed this issue I was struck by the scope of the articles. Three of the five look at the mechanics of building, maintaining, and displaying digital collections, the fourth looks at ways to engage users to help in digital collection management, and the fifth is an opinion piece on open access business models. Taken broadly, these three dimensions, building and maintaining the collections, connecting with users, and figuring out how to pay for it all, cover much of the territory that today makes up the evolving domain of digital libraries.
We lead off with an article from Italy describing D-NET, a software toolkit for federating distributed collections. The authors analyze sustainability, an important issue in collection management. The following article examines the use of Omeka, a digital asset management tool, and addresses its "strengths and weaknesses as a software platform for creating and managing digital collections on the web." A brief video introduction to the tool, which some may find as a useful preface to the article, can be found at http://omeka.org/files/movies/touromeka.mov.
The third article in the "how to" series reports on the Museum Data Exchange. It starts with a useful historical perspective, for those who might have thought the work on digital collections began with the advent of the web browser. The article reports on the detailed analysis of the tools and techniques used in the study, but concludes by looking at the policies, which are frequently more challenging than the technologies.
The fourth article looks at crowdsourcing and its potential use in libraries. The use of volunteer labor is familiar to many in the library arena, but this new phenomenon brings the potential of a many-fold increase in the productive connection between libraries and their users.
At the fifth position, we have an opinion piece from Don King, a distinguished statistician who has been examining publishing and library issues for many years. The position he arrives at may or may not be feasible, which he admits, but I think most will find his analysis at least interesting, and some will find it compelling.
Finally, don't miss our Featured Collection (I recommend the link to the color version and then on to the full resolution version) and our conference report, which gives another perspective on open access.
Laurence Lannom is Director of Information Management Technology and Vice President at the Corporation for National Research Initiatives (CNRI), where he works with organizations in both the public and private sectors to develop experimental and pilot applications of advanced networking and information management technologies.
Copyright © 2010 Corporation for National Research Initiatives | 计算机 |
2015-48/0319/en_head.json.gz/7286 | Comparing an Integer With a Floating-Point Number, Part 1: Strategy
We have two numbers, one integer and one floating-point, and we want to compare them.
Last week, I started discussing the problem of comparing two numbers, each of which might be integer or floating-point. I pointed out that integers are easy to compare with each other, but a program that compares two floating-point numbers must take NaN (Not a Number) into account.
That discussion omitted the case in which one number is an integer and the other is floating-point. As before, we must decide how to handle NaN; presumably, we shall make this decision in a way that is consistent with what we did for pure floating-point values.
Aside from dealing with NaN, the basic problem is easy to state: We have two numbers, one integer and one floating-point, and we want to compare them. For convenience, we'll refer to the integer as N and the floating-point number as X. Then there are three possibilities:
N < X.
X < N.
Neither of the above.
It's easy to write the comparisons N < X and X < N directly as C++ expressions. However, the definition of these comparisons is that N gets converted to floating-point and the comparison is done in floating-point. This language-defined comparison works only when converting N to floating-point yields an accurate result. On every computer I have ever encountered, such conversions fail whenever the "fraction" part of the floating-point number — that is, the part that is neither the sign nor the exponent — does not have enough capacity to contain the integer. In that case, one or more of the integer's low-order bits will be rounded or discarded in order to make it fit.
To make this discussion concrete, consider the floating-point format usually used for the float type these days. The fraction in this format has 24 significant bits, which means that N can be converted to floating-point only when |N| < 2^24. For larger integers, the conversion will lose one or more bits. So, for example, 2^24 and 2^24+1 might convert to the same floating-point number, or perhaps 2^24+1 and 2^24+2 might do so, depending on how the machine handles rounding. Either of these possibilities implies that there are values of N and X such that N == X, N+1 == X, and (of course) N < N+1. Such behavior clearly violates the conditions for C++ comparison operators.
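A quick way to see this behavior (my own illustration, not code from the article; it assumes IEEE 754 single precision, where float carries a 24-bit fraction) is to convert the boundary values and compare the results:

    #include <iostream>

    int main() {
        // Assuming IEEE 754 single precision: float has a 24-bit fraction,
        // so integers up to 2^24 convert exactly, but 2^24 + 1 does not.
        const long n = 1L << 24;             // 16777216
        float a = static_cast<float>(n);     // converts exactly
        float b = static_cast<float>(n + 1); // loses the low-order bit on typical hardware

        std::cout << std::boolalpha
                  << "float(2^24) == float(2^24 + 1): " << (a == b) << '\n'    // usually true
                  << "2^24 < 2^24 + 1:                " << (n < n + 1) << '\n'; // always true
    }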
In general, there will be a number — let's call it B for big — such that integers with absolute value greater than B cannot always be represented exactly as floating-point numbers. This number will usually be 2^k, where k is the number of bits in a floating-point fraction. I claim that "greater" is correct rather than "greater than or equal" because even though the actual value 2^k doesn't quite fit in k bits, it can still be accurately represented by setting the exponent so that the low-order bit of the fraction represents 2 rather than 1. So, for example, a 24-bit fraction can represent 2^24 exactly but cannot represent 2^24+1, and therefore we will say that B is 2^24 on such an implementation.
With this observation, we can say that we are safe in converting a positive integer N to floating-point unless N > B. Moreover, on implementations in which floating-point numbers have more bits in their fraction than integers have (excluding the sign bit), N > B will always be false, because there is no way to generate an integer larger than B on such an implementation.
Returning to our original problem of comparing X with N, we see that the problems arise only when N > B. In that case we cannot convert N to floating-point successfully. What can we do? The key observation is that if X is large enough that it might possibly be larger than N, the low-order bit of X must represent a power of two greater than 1. In other words, if X > B, then X must be an integer. Of course, it might be such a large integer that it is not possible to represent it in integer format; but nevertheless, the mathematical value of X is an integer.
This final observation leads us to a strategy:
If N < B, then we can safely convert N to floating-point for comparison with X; this conversion will be exact.
Otherwise, if X is larger than the largest possible integer (of the type of N), then X must be larger than N.
Otherwise, X > B, and therefore X can be represented exactly as an integer of the type of N. Therefore, we can convert X to integer and compare X and N as integers.
I noted at the beginning of this article that we still need to do something about NaN. In addition, we need to handle negative numbers: If X and N have opposite signs, we do not need to compare them further; and if they are both negative, we have to take that fact into account in our comparison. There is also the problem of determining the value of B.
However, none of these problems is particularly difficult once we have the strategy figured out. Accordingly, I'll leave the rest of the problem as an exercise, and go over the whole solution next week.
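Pending the full solution promised for next week, here is one possible way to express the strategy in C++. This is my own sketch, not the article's forthcoming code; the helper name intLessThanFloat is mine, it assumes a 32-bit int and IEEE 754 float, and it deliberately ignores NaN and negative values, which are exactly the parts the article defers.

    #include <limits>

    // A sketch of the three-step strategy for "is n less than x?", assuming a
    // 32-bit int, IEEE 754 float, x not NaN, and both values non-negative.
    bool intLessThanFloat(int n, float x) {
        // B = 2^k, where k is the number of significand bits in float (usually 24).
        const int B = 1 << std::numeric_limits<float>::digits;  // 16777216 on typical systems

        if (n <= B)                               // step 1: n converts to float exactly
            return static_cast<float>(n) < x;

        // step 2: x is beyond the range of int, so it must exceed n
        if (x >= static_cast<float>(std::numeric_limits<int>::max()))
            return true;

        // step 3: n > B, so x can only exceed n if x is itself an exact integer;
        // truncating x to int therefore cannot change the outcome of the comparison
        return n < static_cast<int>(x);
    }

Whether x is less than n follows from the same three cases with the comparisons reversed.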
| 计算机 |
2015-48/0319/en_head.json.gz/7356 | 2 projects tagged "quarantine"
clamav (1)
sendmail (1)
Postfix (1)
spam-filtering (1)
virtual domain (1)
SpamCheck
SpamCheck is an email scanning and quarantine system for Linux systems. Making use of a number of open source technologies, including SpamAssassin, Exim, and MySQL, it provides an easy-to-use, but powerful method to filter email for your domain. Once configured, multiple domains can be added and administered with the Web interface. Email is scanned and scored, non-spam is then passed on to a destination email server, while spam is either blocked or quarantined. Individual users can log in and review their quarantined email, and manage their settings through the Web-based interface.
GPLv3, Email, Filter, anti-spam, quarantine
Clement is an email server application. Its main function is to block unwanted mail (spam) as soon as possible in the email exchange process. It accepts or rejects email while the SMTP session, initiated by the email sender, is still pending, accepting legitimate email messages without the need to return an error status to non-existent or "borrowed" return address later. Clement can operate in two modes, either as a standard MTA (as sendmail, Postfix, Exim, Exchange, etc.) to store email in the recipient's own area, or to transmit the mail to an another SMTP server acting as smart spam filtering device. Each email domain name Clement knows about can be treated in one of these two modes depending on the group to which the domain name has been set. Each message is verified by a virus scanner (ClamAV) while the SMTP connection is still open, but the refusal of mail and the reason for refusal is notified to the actual sender. Mail management is done via a Web interface and can be delegated to three administrative levels (Root-Admin, Group-Admin, Domain-Admin). Standard users can access their own logs (sent email status, email rejected, quarantined email, etc.). With this interface, the user can handle the rejection and acceptance of mail. Users who are level "Admin" can access the session logs (via the Web interface). Clement uses a SQL database (PostgreSQL, MySQL) to store and manage logs, user profiles, and dynamic management of directives concerning the sender-receiver relationship.
GPLv2, LDAP, Firewall, SMTP, Filter
twyg
A generative tree visualiser for Python.
A Web-based interface for Unix system administration. | 计算机 |
2015-48/0319/en_head.json.gz/7368 | PC getting Burnout
by: John
This is interesting. Looks like Electronic Arts is going to be putting the console hit Burnout Paradise on the PC. Not only that but it's going to be rebuilt specifically for the PC. Don't know what that entails yet but I'm wondering if this won't be another test bed for their new copy protection scheme with SecuRom. With all the peripherals available for the PC, you can build a pretty kicking rig to race around the city.
EA’s Burnout Paradise Revs Its Engines on the PC
REDWOOD CITY, Calif.--(BUSINESS WIRE)--Criterion Games, a studio of Electronic Arts, (NASDAQ:ERTS) today announced that the award-winning driving game Burnout™ Paradise is being rebuilt specifically for the PC. Burnout Paradise will be the first Burnout title ever made for the PC, customized with expanded multiplayer, enhanced online features, and community driven content.
Originally released for the PLAYSTATION®3 computer entertainment system and Xbox 360™ videogame and entertainment system, Burnout Paradise has won over 55 awards worldwide. Burnout Paradise delivers an open-world environment built for intense speed, excitement and exploration and sets a new standard in the seamless transition from single-player offline to social online gameplay.
Burnout Paradise for the PC will combine all the open world racing, intense speed and action of the original game with new gameplay for the PC version.
Gamers can tune into a live webcast at http://criteriongames.com at 8:00 AM PST on Friday, May 9 for more details on this announcement. For more information about Burnout Paradise, please visit http://criteriongames.com or the EA press website at http://info.ea.com.
Burnout Paradise for the PC has not been rated by the ESRB or PEGI. | 计算机 |
2015-48/0319/en_head.json.gz/8569 | Site Info Whois Traceroute RBL Check Site Info
Would you like to get information for a domain name, host or IP address? WHOIS is a database service that allows Internet users to look up a range of information associated with domain names, including the full name of the registrant of the domain name, the date when the domain was created, the date of expiration, the last record of update, the status of the domain, the names of the domain servers, the name of the hosting service, the IP address corresponding to the domain name, and the name of the registrar.
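As a rough illustration of the protocol behind such lookups (this is not the site's own tooling), a WHOIS query is simply a line of text sent over TCP port 43, as described in RFC 3912. The sketch below assumes Python and uses IANA's public WHOIS server as an example starting point; real clients follow the referral in the response to the registry or registrar responsible for the domain.

```python
# Minimal WHOIS lookup sketch (RFC 3912): connect to port 43, send the query
# followed by CRLF, and read until the server closes the connection.
import socket

def whois_query(domain, server="whois.iana.org", port=43, timeout=10):
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    print(whois_query("example.com"))
```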
Would you like to find detailed information about a web site? Site Info is a webmaster tool which provides information about key areas across the website and about how a page is built. Site Info is a service that gathers detailed information about websites: general information, description, target keywords, tags, ranks, site response header, domain information, DNS information, host location, IPs, etc.
Would you like to know your IP address? You need to know your IP address if you play online multiplayer games or would like to use a remote connection to your computer.
Do you want to know why a website or IP address is unreachable and where the connection fails? Traceroute is a webmaster tool that shows how information travels from one computer to another. It lists all the computers the information passes through until it reaches its destination, identifying each computer on that list by name and IP address, along with the amount of time it takes to get from one hop to the next. If there is an interruption in the transfer of data, the traceroute will show where in the chain the problem occurred.
DNS Black List Checker
Would you like to know if a web site or IP address is listed in a multi-DNS blacklist or Real-time Blackhole List? The RBL tool searches, by IP address, the databases of Domain Name System blacklists (DNSBLs) and Real-time Blackhole Lists (RBLs). These lists record the server IP addresses of internet service providers whose customers are responsible for spam. If a web site's IP address appears in a DNSBL or RBL, the site can become invisible to customers whose Internet Service Provider (ISP) uses that DNSBL or RBL to stop the proliferation of spam.
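As a hedged sketch of how such a check works under the hood (not this site's actual implementation): the client reverses the IPv4 octets, appends the blacklist zone, and performs an ordinary DNS lookup; getting an answer back means the address is listed. The zone name below is just one example of a public DNSBL, and individual lists differ in their return codes and query policies.

```python
# Toy DNSBL check: reverse the octets, append the zone, and try an A lookup.
import socket

def is_listed(ip, zone="zen.spamhaus.org"):
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True           # an answer (typically in 127.0.0.x) means "listed"
    except socket.gaierror:
        return False          # NXDOMAIN / lookup failure treated as "not listed"

if __name__ == "__main__":
    print(is_listed("127.0.0.2"))   # 127.0.0.2 is the conventional DNSBL test address
```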
| 计算机
2015-48/0319/en_head.json.gz/8950 | Big Data Analytics 2013 - Microsoft Research
Big Data Analytics 2013
RASP: Large-Scale Graph Traversal with SSD Prefetching
Eiko Yoneki (University of Cambridge), Karthik Nilakant (University of Cambridge), Valentin Dalibard (University of Cambridge), Amitabha Roy (EPFL)
Mining large graphs has now become an important aspect of multiple diverse applications and a number of computer systems have been proposed to efficiently execute graph algorithms. Recent interest in this area has led to the construction of single machine graph computation systems that use solid state drives (SSDs) to store the graph. This approach reduces the cost and simplifies the implementation of graph algorithms, making computations on large graphs available to the average user. However, SSDs are slower than main memory, and making full use of their bandwidth is crucial for executing graph algorithms in a reasonable amount of time. We present RASP (the (R)un(A)head (S)SD(P)refetcher) for graph algorithms that parallelises requests to derive maximum throughput from SSDs. RASP combines a judicious distribution of graph state between main memory and SSDs with an innovative run-ahead algorithm to prefetch needed data in parallel. This is in contrast to existing approaches that depend on multi-threading the graph algorithms to saturate available bandwidth. Our experiments on graph algorithms using random access show that RASP not only is capable of maximising the throughput from SSDs but is also able to almost hide the effect of I/O latency. The improvements in runtime for graph algorithms is up to 14 X when compared to a single threaded baseline. When compared to sophisticated multi-threaded implementations, RASP performs up to 80% faster without the program complexity and the programmer effort needed for multithreaded graph algorithms. Follow us | 计算机 |
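As a toy illustration of the run-ahead prefetching idea described in the abstract (this is not the authors' code; the names and the simulated storage layer are invented for the sketch), a prefetch thread can walk the BFS frontier ahead of the compute loop so that adjacency-list reads overlap instead of being issued one at a time:

```python
# Sketch of run-ahead prefetching for an out-of-core BFS. "SSD-resident" data is
# simulated with a dict plus an artificial delay; a prefetcher thread warms an
# in-memory cache while the main loop consumes the frontier sequentially.
import collections
import threading
import time

GRAPH_ON_DISK = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
CACHE = {}
CACHE_LOCK = threading.Lock()

def read_adjacency(v):
    """Fetch a vertex's neighbours, hitting 'storage' only on a cache miss."""
    with CACHE_LOCK:
        if v in CACHE:
            return CACHE[v]
    time.sleep(0.01)                       # simulated SSD latency
    neighbours = GRAPH_ON_DISK[v]
    with CACHE_LOCK:
        CACHE[v] = neighbours
    return neighbours

def prefetcher(frontier_snapshot):
    """Run ahead of the compute loop, issuing reads for the whole frontier."""
    for v in frontier_snapshot:
        read_adjacency(v)

def bfs(source):
    visited = {source}
    frontier = collections.deque([source])
    while frontier:
        t = threading.Thread(target=prefetcher, args=(list(frontier),))
        t.start()
        next_frontier = collections.deque()
        while frontier:
            v = frontier.popleft()
            for w in read_adjacency(v):    # usually a cache hit thanks to the prefetcher
                if w not in visited:
                    visited.add(w)
                    next_frontier.append(w)
        t.join()
        frontier = next_frontier
    return visited

print(sorted(bfs(0)))
```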
2015-48/0319/en_head.json.gz/9499 | Worms Armageddon - Review
22/12/1999MicroproseTeam 174G$89.95
VMU Game
PAL Border
NoHard33 BlocksYesNoSmall
One of Europe's premier developers during the days of the Amiga was Team 17. In recent years, however, their releases have been fairly quiet, but fortunately for Dreamcast owners they have decided to develop several titles for Sega's new system. Worms Armageddon is the first of these and is a sequel to the previous two Worms games, which appeared on the PC and PlayStation.
The intro for Worms sets the tone of the game perfectly. The humorous opening sees a couple of worms killed while laughing at another who is about to be blown up by explosives. From then on there is little doubt that this game is designed to be fun, and it certainly appears to be.
The idea behind Worms Armageddon is simple. After selecting a team of worms you are placed on a battlefield (a randomly generated map) with opposing worms appearing in the same area. Combat is turn-based, the idea being to blow up the enemy before they blow you up. But it's not as easy as it sounds. The wind constantly changes strength and direction, and this affects different weapons differently (missiles are affected, hand grenades are not). You will also have to work out the trajectory and power given to the weapon before launching it. During battles, crates will occasionally fall from the sky and provide bonuses such as special weapons, health, or utilities such as jet packs or laser guidance equipment.
The weapons in Worms Armageddon are some of the most varied ever in a video game. Not only are you equipped with fairly standard shotguns, hand guns, missiles, hand grenades and mines but also an assortment of odd weapons including exploding sheep, an old woman (who explodes), super banana bomb or a homing pigeon. These are just some of the 55 weapons on offer.
The graphics in Worms Armageddon are rather simple. Don't expect the game to set any new standards in 3D modeling. In fact, don't expect any 3D graphics at all. The game world is presented in 2D with a side on view of the action. The 2D graphics are very crisp and there is excellent variety in the background graphics.
Worms Armageddon has a wide selection of music. Fortunately, unlike a lot of puzzle/strategy games, the music isn't annoying and sets the mood in each level perfectly. One thing that does make this game stand out is the excellent sound effects. While the explosions are nothing you wouldn't expect, it is possible to select the dialect for your worms when they talk or cry out in pain. There are about 30 different languages in total to select from, including a rather dubious Australian accent.
While everything so far points to a great game, there is one major problem. Even though the Dreamcast boasts a 200MHz SH-4 CPU, the game can be frustratingly slow in single-player mode. While you might finish your turn in a couple of seconds, the computer opponents take an eternity, sometimes up to 30 seconds, to decide what to do. There is no way to speed this wait up, and after a few games it becomes very annoying. In two-player mode this problem doesn't come into effect, but for a game with so few CPU calculations required and limited graphics this is an inexcusable mistake.
Don't get me wrong, Worms Armageddon is a good game overall. Unfortunately, as a single-player game the amount of time spent waiting for the CPU opponent takes away most of the fun. As a multiplayer game Worms Armageddon is the best there is. It's a shame that this game wasn't held back for internet play, as it would have been perfect. If you've got a lot of friends, don't hesitate to buy this game. If you're on your own, you may wish to look elsewhere.
74% | 79% | 67% (88% 2+ players) | 71% | 77% | 计算机
2015-48/0319/en_head.json.gz/9697 | kiwanja.net
Inspiring social change
In the same year that Apple introduced their first personal computer – and a full four years before IBM came on the scene in 1981 – Commodore launched their first PET. While the Apple II quickly gained popularity among home users the PET, with its robust metal casing, caught the eye of the education establishment and became a big hit in schools.
Meanwhile back in Jersey, Freddie Cooper (Mr. C. to his friends), a qualified teacher, was running The Learning Centre – a combined social club and educational centre – where he carried out private tutoring for children with learning difficulties in addition to a whole range of sporting/social activities for local kids. He soon realised the potential of personal computing, and began to work on Computer Aided Learning (CAL) techniques. Freddie Cooper was a keen user of infant, emerging CAL techniques.
A couple of years later I joined the club, pretty much the only place to do anything on the estate where I lived at the time. It was housed in a building next to St. Michael’s School (pictured) – a posh private boarding school. They always seemed to be trying to buy it from Freddie Cooper but never managed to until 2002 when he finally retired. In addition to mini football pitches, full-size snooker and pool tables, table tennis and more creative ‘arty’ activities, there was the odd computer and games console floating around (such as the fantastic Atari 2600).
I quickly became fascinated by the Commodore PET, and spent each of my allotted half-an-hour slots looking through the code rather than playing the games themselves. (In those days software was loaded manually via a cassette player, and then manually run, unless it was clever and executed automatically. Before running it you could use the appropriately titled LIST command to see the code on-screen.)
I began playing around with the code, and talked Mr. C. into letting me print off portions on his dot matrix printer. Realising that I had a bit of a knack with the PET, I began experimenting with my own programs, and within a short space of time started writing basic CAL software for a couple of pounds a shot. Over time these got more and more complex, and Reading University took an interest in what we were doing. Sadly nothing came of it. I did get a rise to £5 per program – which were now being tailored to suit the specific needs of each of Freddie’s students.
At the age of 16 I was approached by a local software company and offered work. I did the sensible thing and decided to see my education through – take note, Bill Gates. The reference that Mr. C. wrote for me back then survives to this day.
Other casual programming work did come my way, writing games and bits of demo software for local business-machine suppliers. This gave me the chance to get my hands on some of the latest technology without ever being able to afford to buy any of it – ‘beauties’ such as the Commodore VIC-20, Commodore 64, the Acorn/BBC Micro and last but not least, the Sinclair Spectrum – the 48k model, I hasten to add.
As for the career in computer programming, it never quite happened – although my skills did come in handy many years later when I set out to create the first version of FrontlineSMS.
Recent blog posts You might not change the world. But you can make it a better place.
Joining CARE as their Entrepreneur in Residence
What technology-for-conservation might learn from technology-for-development
The case of We Care Solar and our failure to spot winners
Halting the push-push of global development
Lone innovators of the world unite
Want a holistic view of the world of social innovation? Try these four books.
1995. 2005. 2015. Two decades of code
| 计算机
2015-48/0319/en_head.json.gz/9753 | Microsoft details Windows 8's file explorer
updated 01:55 pm EDT, Mon August 29, 2011
Microsoft outlines changes to Windows 8 Explorer
Microsoft on Monday took the time to detail Windows 8's file explorer on its Developer Network blog. It's being described as the first substantial change to the most widely used desktop tool in a long time. The improvements are meant to eliminate the need for the replacement add-ons power users turn to when managing files on their computers.
Microsoft looked at telemetry data to judge which functions are most common and which areas need the most changes. It found that the majority of the frequently used commands are hidden in sub-menus, so making them more accessible in a refined user interface was a priority. Customer requests to bring back some Windows XP features were also considered. These included the 'Up' button, bringing cut, copy and paste into the top-level user interface, and a more customizable command surface. More keyboard shortcuts were also requested.
The team then set out three main goals when redesigning Explorer in Windows 8: optimizing it for file management tasks, streamlining and organizing the commands, and doing both while maintaining Explorer's heritage. After looking at several options, the team chose an Office-style ribbon. It allows the most important commands to be placed in prominent locations, grouped accordingly, and puts about 200 commands in an easy and consistent view without needing menus, pop-ups, dialog boxes and right-click menus. It should also be familiar to many existing customers and should translate better to touch interfaces. The ribbon provides keyboard shortcuts for all of its commands and lets users customize it with the quick access toolbar.
The ribbon houses three main tabs: Home, Share, and View, along with a File menu and a variety of contextual tabs.
A new Search Tools contextual tab launches when the search box is clicked. It lets users filter results by date ranges, file type, size and author or name. Searches can then be saved for later use. Library, Picture, and Disk Tools are other contextual tabs.
Screen real estate was another user concern. As widescreen formats are the most common, the team optimized the new Explorer for this layout by removing the header at the top of the main view and moving the Details pane to the right side. A one-line status bar at the bottom of the window shows critical information. The new look allows two more lines of files compared to Windows 7 on a 1366x768 resolution screen. Closing the ribbon grants even more vertical real estate.
The Quick Access Toolbar a | 计算机 |
2015-48/0319/en_head.json.gz/10427 | To sign up for updates or to access your subscriber preferences, please enter your email address.
Privacy Act Statement:The collection of your personal information for this U.S. Census Bureau subscription form is authorized under 5 U.S.C. 301 and 44 U.S.C. Section 3101. The purpose of collecting this information is to respond to inquiries or requests, in support of activities or services including social media, subscription email service to disseminate news and information, and blogging capability. Personally identifiable information such as an email address and mobile telephone numbers may be collected. Disclosure of this information is permitted under the Privacy Act of 1974 (5 U.S.C. Section 552a) to be shared among Census Bureau staff for work-related purposes. Disclosure of this information is also subject to all of the published routine uses as identified in the Privacy Act System of Records Notice COMMERCE/DEPT-19, Mailing Lists. Providing this information is voluntary and you may be removed from the subscription at any time. Failure to provide this information may affect the Census Bureau’s ability to disseminate news information to subscribers.
| 计算机
2015-48/0319/en_head.json.gz/10578 | David Marcus
Review: Line Rider iRide
Line Rider iRide is an extension of the web phenomenon that hit the scene a couple of years ago. What started as a simple application that simulated the physics of a guy surfing down a 2D track turned into... well, the same thing, but people have gone nuts with it. Take this video for example. I am in awe, and you probably are too. Keep in mind this was done on a PC, and iRide is the iPhone version of the Line Rider track creator. This has some obvious ups and downs.
Simple and Easy to Learn
All there really is to do in this app is draw your track and hit play. Whether that’s a few swipes or a Herculean labor of love taking up hours of your time is up to you.
Export and Share your Tracks
I think the main point of making an elaborate track is so others can see it and praise you for your genius. This is possible through iRide - just create an account with linerider.com and use the in-app exporter. This can all be done on your iPhone.
Change it Up
Options like changing the colors to a night scheme and controlling the world’s gravity by tilting your iPhone add new dimensions to the gameplay (well, the latter does).
iRide is a mobile port of the full app, and as far as I know there’s not much missing. You can work with multiple different types of track, all of which can be passed through on at least one side. Regular track just lets gravity run its course. Acceleration track does exactly what you think it does. Scenery track is used to create lines that have no collision, which enables extra creativity.
At first glance I thought this was an incredibly simple app, but after playing around with it for a bit I began to appreciate some of the depth that’s possible here. It depends on the skill and dedication of the player. If you’re not great with visual and spatial things, you may be frustrated with trying to line up a jump. However, with some artistic skill and a knack for level design, you can create some amazing tracks like the one above.
I never got into creating Line Rider tracks, but having messed around with this app might get me there. Physics simulations are always fun, and the mix of control and limitations here makes for a number of possibilities. I especially wonder about what a talented designer could do with the gravity feature - requiring players to tilt one way or the other at a certain point to affect the outcome of the ride.
The Bad:
The touch controls are nowhere near as precise as a mouse, which may make it tough to make a truly incredible track on your phone. I was able to make a couple of decent tracks, but it took some finagling. While for some people this may become a mobile gaming obsession, for most the fun won’t last forever. If it’s not your thing it’s not your thing.
If you’ve got money to burn and are curious, give this a try. If you’re feeling a little more cautious, head to the Line Rider website and give it a test run - it’s free. Essentially you’re paying for portability here, and three dollars is just about the most I’d be willing to pay for it. All that said, this is Line Rider and it’s on your iPhone, so if you’re a fan then I’d be surprised if you didn’t have this already. I enjoyed it and look forward to making some more theme-based tracks when I get the time. | 计算机 |
2015-48/0319/en_head.json.gz/10895 | Frontier Kernel
Developing the Frontier kernel in the 21st Century
Posted by Dave Winer, 7/15/04 at 2:18:52 PM.
What is Frontier? It's a high performance Web content management, object database, system-level and Internet scripting environment, including source code editing and debugging.
Why release Frontier as open source? I explained the rationale for releasing Frontier as open source on Scripting News on May 17, 2004. Here's a summary.
1. After I left UserLand in the summer of 2002, it became largely a company that markets and develops Manila and Radio. My concern was when will UserLand get around to enhancing and improving the "kernel" -- the large base of C code that runs Manila and Radio -- the scripting language, object database, verb set, server, multi-threaded runtime, content management framework. It's been several years since there was a meaningful update of that code. By releasing the code, the hope is that we will be able to pick up work on the kernel and start fixing some bugs and adding some long-waited-for features.
2. Products that Manila and Radio compete with don't have their own kernels, they build off development environments created by others. For example, Movable Type is written in Perl. WordPress is PHP. Blogger is Java. UserLand's products are different because they build on a private platform. For a long time we saw this as an advantage, the UserLand runtime is very rich and powerful, and offered performance benefits. When a new layer came on, for example the CMS, when it got stable and mature, we'd "kernelize" it, so it would be super-fast. But experience in the market said that, to succeed, UserLand didn't need to own its kernel. In fact, that it was the only developer using this kernel may well have been a liability for UserLand. 3. In 1987 we sold Living Videotext to Symantec, and along with it, sold them our products, ThinkTank, Ready and MORE. I appreciate what Symantec did for us, I'm still living off the money I made in the public stock offering, but the products died inside Symantec. I'm not blaming them for that, because it's very likely they would have died inside Living Videotext had we not been acquired. But some good products disappeared. To this day people ask me what became of MORE, and tell me how advanced it was, and how nothing has replaced it. It's a sad story, and a shame, that the art of outlining took such a hit. I swore this would never happen again. There are a lot of good ideas in that base of software that you won't find elsewhere. If it disappeared it would be a loss like the MORE loss.
What are your expectations? Even if no bugs get fixed, if no features get added, if no new OSes are supported, it will be worth it, because its future will be assured. Why use the GPL? The GPL is the right license for our goals. We want to encourage developers to add features compatibly, so that old Frontier apps run in the new environment(s). If commercial developers want to add private features to the kernel, we will try to work with them, we just want to be sure we can have a conversation about compatibility, and perhaps create revenue to fund open source development. If a non-commercial project emerges that breaks compatibilty, because the GPL is used, we will have the option of bringing their work into compatibility.
In the past, Rule 1 for Frontier development has been No Breakage. It seems like that should still be part of the culture of the community. Apps matter. Ideas for projects Everyone wants a Linux version of Frontier.
We would love to have other scripting languages, especially Java and Python, running inside Frontier.
Convert it to build with other tools.
Utility to kernelize scripts.
Deeply integrate a BitTorrent server.
Which roots will be released? To begin, we have limited the release to the Frontier kernel, written in C; and Frontier.root, which is the main object database, which contains script-implemented verbs, and default data for Tools. Frontier won't run without Frontier.root in the application folder.
The current UserLand distribution includes a few root files that implement higher level functionality, such as mainResponder.root, which is a rich application server, prefs.root, which implements an HTML preferences interface, and manila.root, which implements the Manila content management system. All roots except manila.root will be released under the GPL sometime in the near future, after the dust settles from the initial kernel release. In addition, certain parts of Radio.root, the counterpart of Frontier.root for the Radio UserLand product, will also be packaged and released as open source, notably upstreaming, which is a core technology and is broadly useful, and could use some performance enhancements and fixes. The page rendering technoloogy in Radio may also be released. However, the products, Manila and Radio UserLand, remain in UserLand Software, and are not and will not be licensed as open source.
If you have other questions... Please post them here.
Last update: Tuesday, September 28, 2004 at 9:54:39 AM
Verb docs
Referers
What is Frontier?
A high performance Web content management, object database, system-level and Internet scripting environment, including source code editing and debugging.
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/Share Alike Creative Commons license. | 计算机 |
2015-48/0319/en_head.json.gz/11156 | TheDeanBeat: Kickstarter isn’t indie gamemaker Double Fine’s only funding route
Dean Takahashi August 10, 2012 8:00 AM
Tags: Double Fine Productions, featured, Greg Rice, The DeanBeat, Tim Schafer Double Fine Productions showed its creativity when it raised $3.3 million via crowdfunding site Kickstarter to fund Double Fine Adventure, an upcoming adventure game that’s being made with your money.
But that’s not all Double Fine is working on, and it’s not the only way the San Francisco company has raised money. The story behind Double Fine’s creative financing shows the lengths the company will go to in order to not just survive as an indie game development studio but to thrive and earn more money to fund future games — while preserving its independence.
“We’re making a switch from console work-for-hire and going to direct to consumer and free-to-play projects,” said Justin Bailey (pictured right), the vice president of business development at Double Fine. “That process has taken place over the last 18 months.”
If you think of the Kickstarter money as project-specific, or a kind of seed round, then the next batch of money may come from real investors, so that Double Fine can move to more projects and more platforms, said Greg Rice, the producer of Double Fine Adventure.
The 12-year-old game studio has been very dependent on game publishers in the past. Schafer left his job as a game designer at LucasArts in 2000 and started Double Fine Productions that year. The company created Psychonauts, an acclaimed original platform game that was published by Majesco for the original Xbox in 2005. Then it made Brütal Legend, a comic rock-star adventure game that Electronic Arts published in 2009.
During this time, Schafer established himself as one of the zany, most creative people in the video game business. He was hilarious as the emcee of one of the recent game awards shows, and his creativity is reflected in the humor and original stories in his adventure games. His personality came through in his various Kickstarter promo videos, and that no doubt helped his company raise even more money via fans.
“We try to be as creative with our business development as we are with our games,” said Schafer. “We are always on the lookout for ways to break the traditional mold for game funding. So when we see new opportunities come up — like Kickstarter, angel investment, or other alternative funding models — even though they might seem new and risky at the time, they are also very attractive to us. Because, let’s face it, anything beats the traditional game funding model. It’s like a loan with a really horrible interest rate. No revenue usually until you’ve not just paid back the development cost, but paid it back many times over. Plus, lots of entanglements with intellectual property usually.”
Double Fine might have continued in that pattern, but its games weren’t megahits (despite Psychonauts receiving heaps of critical acclaim), and publishers have become even more risk averse these days. Double Fine has managed to get traditional deals with both Sega and Microsoft, making games for them.
But on top of this, Schafer turned to Kickstarter, where he raised far more money than he ever hoped to get for his next big game, Double Fine Adventure. Kickstarter allowed Double Fine to fund a big game without giving up any equity. And if that game does well, it will likely fund yet another game.
Yet the company’s ambitions run far beyond what it can do with the Kickstarter money. The company hasn’t really talked about the other options it has developed, until now. A year ago, a rich guy named Steve Dengler pinged Tim Schafer, the chief executive of Double Fine, via Twitter. Dengler, the founder of currency exchange web site Xe.com, was a longtime fan and fronted $1 million so that Double Fine could make more games. The money from Dengler, a kind of superangel, went toward the first port of Psychonauts for the Macintosh, as well as the publication of Stacking and Costume Quest on the PC. Outside investment has also spurred the company’s first iOS game (for Apple iPad, iPod Touch, and iPhone).
“I’m a fan with money,” said Dengler. “That pretty much sums it up. I’ve been a fan of Double Fine for years, and now I get to help them make new games on their own terms. Traditionally, a developer needed a publisher to get their work made and out to the fans. And traditionally, that relationship was pretty one-sided. But together, we are changing that.”
Dengler says he doesn’t want to mess with Double Fine’s success.
“What I do want to do is help them make great games for their fans because I am one of those fans,” said Dengler. “And so far it’s working wonderfully. It’s tremendously satisfying.”
Schafer said, “The great thing about having multiple teams at Double Fine is that we can experiment. We can try out different game genres, platforms, and sizes, and we can try out different funding models. So when someone like Steve Dengler sends a tweet my way, asking how much it would cost to port Brütal Legend to PC, I’m in a position to have a serious conversation with him. I don’t have to say, ‘Sorry, everybody’s busy on this one big game we’re making over here.'”
Schafer added, “Steve is great because he is literally an angel investor — he came out of the clear blue sky, has mysterious powers, and he only uses them for good. Oh, and he can fly, too…in his Cessna. He loves games, he likes Double Fine, and he wants to remove the money obstacle from our path and help us achieve our creative ambitions. The best thing about a partner like Dracogen is the creative freedom. There’s no bureaucratic overhead like time-wasting green-light committees and milestone acceptance tests. We get to focus on making the game good because we have his trust, and in exchange for that, we offer him complete transparency into the product. Mutual trust and mutual respect is critical in this kind of relationship.”
Double Fine is also self-publishing games on Steam (such as the Mac port for Psychonauts and Costume Quest on the PC), the digital distribution service owned by Valve. It gives a commission to Valve, but it no longer has to share a cut with game publishers. Double Fine also eventually hopes to publish games directly on its own web site where consumers could download and play games directly, with no middleman at all. With a free-to-play game, Double Fine will figure out how to reap profits from the sale of virtual goods. It is through these measures that the company, which has only 60 employees, hopes to stay both small and independent.
Such creative financing has enabled the company to get at least five different game efforts under way at once.
“It is complicated to keep straight, but we have crowdfunding, self-publishing, the mobile studio, and some legacy business,” said Bailey. “We are now majority-funded by crowdfunding or outside investment. By next year, hopefully that transition will be complete,” with almost no traditional publishers or work-for-hire deals funding the games.
But if it raises a further round of money from outside investors, such as venture capitalists, then Double Fine could experiment further and take its games to all the platforms where people want to play, said Rice. VCs often bring a lot of partnerships with them that could prove very valuable to indie gamemakers. And while venture capitalists have funded a lot of fresh mobile and social game teams, Double Fine has a team that has had a longer track record.
“Our advantage is we have developers who already know how to make triple-A content,” said Bailey.
Double Fine’s doors are open. Just as the Kickstarter fans came rushing in to give the company their money, now the venture capitalists can do so. It’s fresh territory for the company, which doesn’t have a lot of experience working with Silicon Valley VCs. But it shows that having multiple irons in the fire and multiple sources of investment are important parts of any diversified indie game company that is in it for the long haul.
“We’ve needed a lot of money up front from publishers in the past, but over 12 years, we’ve been in transition,” said Bailey. “The publisher model is changing.”
“And indie gaming is growing up a little bit and maturing as a business process,” said Bailey. “Our goal is to fund ourselves as an independent game developer. You want a diversified approach. The nicest thing is the indie community is tight, and we are all trying to help each other in every way.”
Over time, the company will still work on what it considers to be triple-A games. But console titles, where a publisher funds a huge game project for publication on just one platform, are going to be less and less likely. Ouya is another option as an open indie game console platform, and Double Fine is watching it closely, but it’s still early to commit to it.
“[Ouya’s] philosophy falls in line with ours,” said Rice.
And one day, if Double Fine is successful, it might fund other fellow indie game studios. If you want to send bags of money to Double Fine, send an email to [email protected]. | 计算机 |
2015-48/0319/en_head.json.gz/11480 | Microsoft Accidentally Issues DMCA Takedowns for U.S. Gov't, Google, Itself
27 comment(s) - last by lagomorpha.. on Oct 13 at 8:18 PM
Seemingly incompetent contractors lead to bizarre DMCA notices on Microsoft's behalf
In a fit of sloth, Microsoft Corp. (MSFT) has become one of the companies that outsource and automate their Digital Millennium Copyright Act (DMCA) [PDF] (see Title 17 of the U.S. Code) takedown request process. Unfortunately, its partners' code appears to be badly broken, posting a whole host of false positives.
For those unfamiliar, the DMCA provides a mechanism by which companies can send requests to search engine firms like Google Inc. (GOOG), demanding that they remove certain search results believed to contain "stolen" intellectual property. By blacklisting sites, companies can stop users from finding them and (in theory) halt the spread of the "stolen" work.
Such requests are often abused. Google claims that a third of takedown requests are not valid copyright claims and that three-fifths target a competitor's webpage.
But in Microsoft's case the abuse appears to be accidental.
In fact, Microsoft's third-party DMCA takedown contractor, Marketly LLC, asked Google to remove "bing.com" from its search results 11 times. The contractor also asked Google 335 times to take down Microsoft's own homepage, on Microsoft's behalf.
In a testimonial on its homepage, Marketly quotes Microsoft as pleased with its performance: "Marketly has engineered solutions that address today's anti-piracy challenges, producing quantifiable results for Microsoft. We are pleased with Marketly's responsiveness. They have been very easy to work with. – Online Piracy Senior Program Manager, Microsoft Corporation..."
Thanks to another partner -- LeakID, a "digital agency... founded by experts from the world of radio, television and internet" -- Microsoft also became the only party to request the takedown of U.S. Environmental Protection Agency and U.S. Department of Health and Human Service webpages.
Microsoft wants to take down the U.S. government -- or at least some of its webpages.
[Image Source: Microsoft]
Microsoft was one of only two copyright owners to try to take down a U.S. National Institutes of Health webpage, as well.
Those takedowns were among the high profile targets of a July 27, 2012 takedown request list on Google's clearinghouse of takedown information and chronicled by chillingeffects.org – a collaboration between the Electronic Frontier Foundation (EFF) and various law school professors. Among other high profile targets of Microsoft's/leakid's July scattershot include a number of news sites, such as BBC News, CBS Corp. (CBS), Rotten Tomatoes, TechCrunch, Time Warner Inc.'s (TWX) CNN, ScienceDirect, RealClearPolitics, and The Huffington Post (among others).
Leakid has tried, on Microsoft's behalf, to take down Wikipedia.org 4 times without success.
Microsoft and DMCA cat have a lot in common. [Image Source: Error Access Denied]
Microsoft's contractors have sent out nearly 5 million takedown requests to Google alone, so it's easy to see how such sloppy errors could occur, though you'd think the partners could be a bit smarter with their filtering.
Sadly Microsoft is not alone in its display of DMCA insanity. Just ask convicted tax evader Gary Quintinsky who tried to take down the U.S. Internal Revenue Service's homepage. That said, Microsoft and its "cronies" appear to be leading the way in bogus takedown requests.
Sources: Google [Transparency Report], Chilling Effects
RE: It's not really a laughing matter.
While that's likely true, it doesn't mean it cannot happen. A bug can come from a corner case in the code that nobody thought was possible or nobody thought of. It happens all the time even with thorough testing; that's why many products ship and get patched later.
We are not talking about a product here. But liberty. Know the difference. Learn the difference.
darkhawk1980
Just because it has classically worked like that (put it out there, patch it/fix it later) doesn't mean that it's the correct way of doing it. There's a reason that PC gaming has slowly been dying, and it sadly doesn't have to do with piracy or cost; it's the lack of well-developed games. Consider that for each and every DMCA takedown, Google has to invest time and money into investigating it. In effect, it's a way for Microsoft (and others) to bleed Google dry, one drop at a time. Personally, I think some kind of fine system should be in place for improper DMCA takedowns. The current system provides all the power to the issuer of the takedown, and none to the entity being issued. Make the fine large enough that it encourages companies to double-check their data before issuing takedowns. $10k per improper takedown? I don't know, but that would be a good start.
Free speech cannot and should not be "patched later." If the system has a bug, it doesn't matter. Bogus takedown attempts should be fined at a minimum, and companies with a high percentage of bogus attempts should be investigated for fraud and slander. Fines from such attempts should be used towards supplementing legal assistance for those appealing requests. To continue the discouragement, fines should be progressive and multiply. There should not be a "good enough" or "close enough" attitude on this matter. The internet is already at enough risk.
foolsgambit11
So you just have the software flag sites for verification by an actual person before the takedown notice is filed, instead of automatically sending takedown notices for any site it flags. In fact, I think they should ban automated takedown notices completely. There should have to be human verification prior to the notice, and a person's name should be on the takedown, so somebody is responsible for the repercussions of fraudulent notices.
Tech Executives Agree U.S. Copyright Law is Making a Horrible Mess
UW Study Frames Network Printer, Misc Hardware for DMCA Complaints
Blog Community Launches "F U AP" Campaign In Response to AP DMCA Takedowns
Cowboys DMCA Animal Cruelty Videos, Legal Battle Ensues | 计算机 |
2015-48/0319/en_head.json.gz/12335 | Share Share StumbleUpon
Creating a Logotype Depends More on the Means Than the End Branding, How To, Web Design ·
May 10, 2010 The process of designing a type-based logo is similar to that of designing a shape-based logo. Both logos need to convey a message, do it quickly and appease the client’s taste.
All of these objectives can be accomplished by defining goals, favoring message over convention and questioning our assumptions as designers—even to the point of considering Arial or Helvetica.
Anyone who designs a logo faces many questions. What should it look like? In what formats will it be presented? Does a particular color scheme need to be followed? As rough drafts are refined, the urge to find a general “solution” overrides the importance of these initial questions, which often end up neglected. When design becomes a question of preference, the end result is debatable.
Graphic design is a process of solving problems through visual communication. The process of designing a logo can be regarded as a series of steps that solves a series of questions. This article tells the story of a process that focuses on those questions. What’s the Project?
Smalls, Middleton & Bigman, a fictional law firm, hires a professional design company to develop its company logo. Problems begin with its initial requirements:
It must be easily recognizable. It should work at all sizes, including for business cards, letterheads and billboards. It should look professional. These could apply to any logo for any company. So, the designer asks for more information about the company itself.
In what specialties does it excel? What sets it apart from competitors? What brings clients back for business? Smalls, Middleton & Bigman is an aggressive new firm specializing in regional real estate deals. Its owners want to make a dent in the established market. Most of its staff were born and raised in the area that it covers. They are locals who understand the region’s history and politics and could rattle off a list of the best barbecue places in town. Although the principal partners have many contacts, the firm has no repeat business because it has no clients yet.
Everyone involved agrees that starting with the right logo is important, especially in a market with 20-year veterans who advertise actively. While the competition uses law books and scales of justice in its imagery, SM&B wants to emphasize its memorable names.
The designer immediately sets out to create a logotype.
The Importance of Shape
A logotype is a graphical trademark that uses type as its primary or only element. Like an icon, it expresses a message, but with letterforms alone. A logotype should communicate the name of the company and reflect its personality.
The temporary letterhead created by the firm’s secretary is quickly rejected.
Seasoned type designers might roll their eyes at the sight of Arial, Times New Roman or Papyrus (or of a letterhead created in MS Word), but that response smacks of snobbery.
Which of the above logos tells people what the law firm specializes in? Which one sets the firm apart from its competitors? When we ask whether these are solutions to problems, then we’re using design as a means of problem-solving.
The examples above aren’t logotypes. They’re just text. How do we tell people non-verbally what the law firm does? That is, not with text alone.
“It’s Ugly” Isn’t Reason Enough
What about Arial, Times New Roman and Comic Sans makes designers cringe? It’s not necessarily the letterforms. Helvetica, for example, is a well-designed geometric typeface that dates back to 1957.
At a glance, Helvetica is rather plain. But that’s because we’re used to it. Look again:
The uppercase A steps outside of its container on the left to accommodate its acute point. The round counters in b, c, d, g, o, p and q match precisely. The ascenders and feet have a uniform width, even when they end at an angle or curve. The x-height of most letters may as well have been cut with a razor. There are only three variations of diagonal angles. Overall, it’s hard to imagine a more legible face (though some have tried). But its succe | 计算机 |
2015-48/0319/en_head.json.gz/12451 | (Redirected from S.M.A.R.T)
This article is about the computer drive monitoring system. For the mnemonic used in other contexts, see Smart.
S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology; often written as SMART) is a monitoring system included in computer hard disk drives (HDDs) and solid-state drives (SSDs)[1] that detects and reports on various indicators of drive reliability, with the intent of enabling the anticipation of hardware failures.
When S.M.A.R.T. data indicates a possible imminent drive failure, software running on the host system may notify the user so stored data can be copied to another storage device, preventing data loss, and the failing drive can be replaced.
2 History and predecessors
3 Provided information
4 Standards and implementation
4.1 Lack of common interpretation
4.2 Visibility to host systems
5 Access
6 ATA S.M.A.R.T. attributes
6.1 Known ATA S.M.A.R.T. attributes
6.2 Threshold Exceeds Condition
7 Self-tests
Hard disk failures fall into one of two basic classes:
Predictable failures, resulting from slow processes such as mechanical wear and gradual degradation of storage surfaces. Monitoring can determine when such failures are becoming more likely.
Unpredictable failures, happening without warning and ranging from electronic components becoming defective to a sudden mechanical failure (which may be related to improper handling).
Mechanical failures account for about 60% of all drive failures.[2] While the eventual failure may be catastrophic, most mechanical failures result from gradual wear and there are usually certain indications that failure is imminent. These may include increased heat output, increased noise level, problems with reading and writing of data, or an increase in the number of damaged disk sectors.
A field study at Google covering over 100,000 consumer-grade drives from December 2005 to August 2006 found correlations between certain SMART information and actual failure rates. In the 60 days following the first uncorrectable error on a drive (SMART attribute 0xC6 or 198) detected as a result of an offline scan, the drive was, on average, 39 times more likely to fail than a similar drive for which no such error occurred. First errors in reallocations, offline reallocations (SMART attributes 0xC4 and 0x05 or 196 and 5) and probational counts (SMART attribute 0xC5 or 197) were also strongly correlated to higher probabilities of failure. Conversely, little correlation was found for increased temperature and no correlation for usage level. However, the research showed that a large proportion (56%) of the failed drives failed without recording any count in the "four strong S.M.A.R.T. warnings" identified as scan errors, reallocation count, offline reallocation and probational count. Further, 36% of drives failed without recording any S.M.A.R.T. error at all, except the temperature, meaning that S.M.A.R.T. data alone was of limited usefulness in anticipating failures.[3]
PCTechGuide's page on SMART (2003) comments that the technology has gone through three phases:[4]
In its original incarnation SMART provided failure prediction by monitoring certain online hard drive activities.
A subsequent version improved failure prediction by adding an automatic off-line read scan to monitor additional operations.
The latest "SMART" technology not only monitors hard drive activities but adds failure prevention by attempting to detect and repair sector errors. Also, while earlier versions of the technology only monitored hard drive activity for data that was retrieved by the operating system, this latest SMART tests all data and all sectors of a drive by using "off-line data collection" to confirm the drive's health during periods of inactivity.
History and predecessors
An early hard disk monitoring technology was introduced by IBM in 1992 in its IBM 9337 Disk Arrays for AS/400 servers using IBM 0662 SCSI-2 disk drives.[5] Later it was named Predictive Failure Analysis (PFA) technology. It was measuring several key device health parameters and evaluating them within the drive firmware. Communications between the physical unit and the monitoring software were limited to a binary result: namely, either "device is OK" or "drive is likely to fail soon".
Later, another variant, which was named IntelliSafe, was created by computer manufacturer Compaq and disk drive manufacturers Seagate, Quantum, and Conner.[6] The disk drives would measure the disk’s "health parameters", and the values would be transferred to the operating system and user-space monitoring software. Each disk drive vendor was free to decide which parameters were to be included for monitoring, and what their thresholds should be. The unification was at the protocol level with the host.
Compaq submitted IntelliSafe to the Small Form Factor (SFF) committee for standardization in early 1995.[7] It was supported by IBM, by Compaq's development partners Seagate, Quantum, and Conner, and by Western Digital, which did not have a failure prediction system at the time. The Committee chose IntelliSafe's approach, as it provided more flexibility. The resulting jointly developed standard was named SMART.
That SFF standard described a communication protocol for an ATA host to use and control monitoring and analysis in a hard disk drive, but did not specify any particular metrics or analysis methods. Later, "SMART" came to be understood (though without any formal specification) to refer to a variety of specific metrics and methods and to apply to protocols unrelated to ATA for communicating the same kinds of things.
Provided information
The technical documentation for SMART is in the AT Attachment (ATA) standard. First introduced in 2004,[8] it has undergone regular revisions,[9] the latest being in 2008.[10]
The most basic information that SMART provides is the SMART status. It provides only two values: "threshold not exceeded" and "threshold exceeded". Often these are represented as "drive OK" or "drive fail" respectively. A "threshold exceeded" value is intended to indicate that there is a relatively high probability that the drive will not be able to honor its specification in the future: that is, the drive is "about to fail". The predicted failure may be catastrophic or may be something as subtle as the inability to write to certain sectors, or perhaps slower performance than the manufacturer's declared minimum.
The SMART status does not necessarily indicate the drive's past or present reliability. If a drive has already failed catastrophically, the SMART status may be inaccessible. Alternatively, if a drive has experienced problems in the past, but the sensors no longer detect such problems, the SMART status may, depending on the manufacturer's programming, suggest that the drive is now sound.
The inability to read some sectors is not always an indication that a drive is about to fail. One way that unreadable sectors may be created, even when the drive is functioning within specification, is through a sudden power failure while the drive is writing. Also, even if the physical disk is damaged at one location, such that a certain sector is unreadable, the disk may be able to use spare space to replace the bad area, so that the sector can be overwritten.[11]
More detail on the health of the drive may be obtained by examining the SMART Attributes. SMART Attributes were included in some drafts of the ATA standard, but were removed before the standard became final. The meaning and interpretation of the attributes vary between manufacturers, and are sometimes considered a trade secret for one manufacturer or another. Attributes are further discussed below.[12]
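As an illustrative sketch only (the exact semantics vary by vendor and are not fixed by the standard), the attribute model described above can be thought of as a set of records, each carrying a raw value, a normalized value, a recorded worst value and a threshold, with the overall status derived from threshold comparisons:

```python
# Hypothetical attribute model: names and example values are invented.
from dataclasses import dataclass

@dataclass
class SmartAttribute:
    attr_id: int
    name: str
    raw: int
    normalized: int   # 1 (worst) .. 253 (best)
    worst: int
    threshold: int

def smart_status(attributes):
    # "Threshold exceeded" if any attribute's normalized value has dropped
    # to or below its (non-zero) vendor-defined threshold.
    failing = [a for a in attributes if a.threshold > 0 and a.normalized <= a.threshold]
    return ("threshold exceeded", failing) if failing else ("threshold not exceeded", [])

attrs = [
    SmartAttribute(0x05, "Reallocated Sectors Count", raw=12, normalized=98, worst=98, threshold=36),
    SmartAttribute(0xC5, "Current Pending Sector Count", raw=0, normalized=200, worst=200, threshold=0),
]
print(smart_status(attrs)[0])
```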
Drives with SMART may optionally maintain a number of 'logs'. The error log records information about the most recent errors that the drive has reported back to the host computer. Examining this log may help one to determine whether computer problems are disk-related or caused by something else (error log timestamps may "wrap" after 2^32 ms ≈ 49.71 days[13])
A drive that implements SMART may optionally implement a number of self-test or maintenance routines, and the results of the tests are kept in the self-test log. The self-test routines may be used to detect any unreadable sectors on the disk, so that they may be restored from back-up sources (for example, from other disks in a RAID). This helps to reduce the risk of incurring permanent loss of data.
Standards and implementation
Lack of common interpretation
Many motherboards display a warning message when a disk drive is approaching failure. Although an industry standard exists among most major hard drive manufacturers,[4] there are some remaining issues and much proprietary "secret knowledge" held by individual manufacturers as to their specific approach. As a result, S.M.A.R.T. is not always implemented correctly on many computer platforms, due to the absence of industry-wide software and hardware standards for S.M.A.R.T. data interchange.[citation needed]
From a legal perspective, the term "S.M.A.R.T." refers only to a signaling method between internal disk drive electromechanical sensors and the host computer. Hence, a drive may be claimed by its manufacturers to implement S.M.A.R.T. even if it does not include, say, a temperature sensor, which the customer might reasonably expect to be present. Moreover, in the most extreme case, a disk manufacturer could, in theory, produce a drive which includes a sensor for just one physical attribute, and then legally advertise the product as "S.M.A.R.T. compatible".[citation needed]
Visibility to host systems
Depending on the type of interface being used, some S.M.A.R.T.-enabled motherboards and related software may not communicate with certain S.M.A.R.T.-capable drives. For example, few external drives connected via USB and Firewire correctly send S.M.A.R.T. data over those interfaces. With so many ways to connect a hard drive (SCSI, Fibre Channel, ATA, SATA, SAS, SSA, and so on), it is difficult to predict whether S.M.A.R.T. reports will function correctly in a given system.
Even with a hard drive and interface that implements the specification, the computer's operating system may not see the S.M.A.R.T. information because the drive and interface are encapsulated in a lower layer. For example, they may be part of a RAID subsystem in which the RAID controller sees the S.M.A.R.T.-capable drive, but the main computer sees only a logical volume generated by the RAID controller.
On the Windows platform, many programs designed to monitor and report S.M.A.R.T. information will function only under an administrator account. At present, S.M.A.R.T. is implemented individually by manufacturers, and while some aspects are standardized for compatibility, others are not.
Access
For a list of various programs that allow reading of Smart Data, see Comparison of S.M.A.R.T. tools.
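As a hedged example, on a Unix-like system the smartctl utility from the smartmontools package can report the drive's health status (-H) and its attribute table (-A). The minimal Python wrapper below simply shells out to it and does only superficial filtering, since the output format varies by drive and smartctl version; it typically requires root privileges and a real device path such as /dev/sda.

```python
# Thin wrapper around smartctl; parsing is deliberately minimal.
import subprocess

def smart_health(device="/dev/sda"):
    result = subprocess.run(["smartctl", "-H", device],
                            capture_output=True, text=True, check=False)
    return result.stdout

def smart_attributes(device="/dev/sda"):
    result = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True, check=False)
    # Keep only lines that look like attribute-table rows (they start with an ID number).
    return [line for line in result.stdout.splitlines() if line[:4].strip().isdigit()]

if __name__ == "__main__":
    print(smart_health())
    for row in smart_attributes():
        print(row)
```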
ATA S.M.A.R.T. attributes
Each drive manufacturer defines a set of attributes,[14][6] and sets threshold values beyond which attributes should not pass under normal operation. Each attribute has a raw value, whose meaning is entirely up to the drive manufacturer (but often corresponds to counts or a physical unit, such as degrees Celsius or seconds), a normalized value, which ranges from 1 to 253 (with 1 representing the worst case and 253 representing the best) and a worst value, which represents the lowest recorded normalized value. Depending on the manufacturer, a value of 100 or 200 will often be chosen as the initial normalized value.[citation needed]
Manufacturers that have implemented at least one SMART attribute in various products include Samsung, Seagate, IBM (Hitachi), Fujitsu, Maxtor, Toshiba, Intel, sTec, Inc., Western Digital and ExcelStor Technology.
Known ATA S.M.A.R.T. attributes
The following chart lists some S.M.A.R.T. attributes and the typical meaning of their raw values. Normalized values are always mapped so that higher values are better (with only very rare exceptions such as the "Temperature" attribute on certain Seagate drives[15]), but higher raw attribute values may be better or worse depending on the attribute and manufacturer. For example, the "Reallocated Sectors Count" attribute's normalized value decreases as the count of reallocated sectors increases. In this case, the attribute's raw value will often indicate the actual count of sectors that were reallocated, although vendors are in no way required to adhere to this convention.
As manufacturers do not necessarily agree on precise attribute definitions and measurement units, the following list of attributes should be regarded as a general guide only.
Table legend: for some attributes a higher raw value is better, for others a lower raw value is better; attributes marked as critical (shown as pink rows in the original table) are potential indicators of imminent electromechanical failure.

Read Error Rate: (Vendor specific raw value.) Stores data related to the rate of hardware read errors that occurred when reading data from a disk surface. The raw value has different structure for different vendors and is often not meaningful as a decimal number.
Throughput Performance: Overall (general) throughput performance of a hard disk drive. If the value of this attribute is decreasing there is a high probability that there is a problem with the disk.
Spin-Up Time: Average time of spindle spin up (from zero RPM to fully operational [milliseconds]).
Start/Stop Count: A tally of spindle start/stop cycles. The spindle turns o
2015-48/0319/en_head.json.gz/13971 | Sign in Register GOG Sale: 50% off Witcher 3! 80% off D&D classic games (Baldur's Gate, Icewind Dale, etc) Contribute Frank O'MalleyMainCreditsBiographyPortraitsDeveloper Biographyus.playstation.comFrank O'Malley brings more than 35 years of business and sales experience in the videogame industry to his position as vice president, sales, Sony Computer Entertainment America. O’Malley is responsible for accelerating category momentum and expanding the company’s leadership position by forging sales relationships to increase and enhance sales, distribution, retail opportunities and channel marketing.O’Malley began his tenure at Sony Computer Entertainment America in 1995, when he joined the PlayStation sales team as the eastern regional sales manager. While at the company, O’Malley played a key role in developing and expanding the company’s retail and distribution infrastructure and relationships, which have evolved considerably since the introduction of the PlayStation game console in 1995. O’Malley’s background in the videogame industry started in 1981 when he joined Atari, as national accounts manager. He also gained significant sales experience while working with leading developer, Electronic Arts, where he escalated the company’s sales efforts by expanding retail and distribution channels. Prior to joining Sony Computer Entertainment America, O’Malley held various sales positions with Mindscape/Software Toolworks, a publisher of PC software and videogames. In addition to his extensive background in the videogame industry, O’Malley also spent several years in the toy industry as vice president, sales at Marx Toys.O’Malley obtained a bachelor of science degree from the University of Scranton and pursued graduate studies at both University of Scranton and the University of Bridgeport.
Contributed by Jeanne (75469) on Jan 04, 2005.
Frank O'Malley
2015-48/0319/en_head.json.gz/14293 | MS-DOS is 30 something
Software | tags: Bill Gates, IBM, microsoft, ms-dos, pcs, Redmond. July 28, 2011, by Edward Berridge. The name MS-DOS is 30 years old today. A fledgling outfit called Microsoft emerged from the primordial slime and repackaged a disk operating system it bought from an outfit called Seattle Computer Products.
SCP was a hardware company owned and run by a bloke called Rod Brock. Before that, Brock had been developing something he called either QDOS or 86-DOS, which he had been using to run on a CPU card based on Intel's 8086 processor.
SCP had released the card in November 1979 and it shipped with an 8086-compatible version of Microsoft's Basic language interpreter. SCP decided it had to create its own OS for the card, which it did in August 1980. QDOS stood for Quick and Dirty Operating System, and it was written by Tim Paterson.
The problem was that QDOS was a bit too much like Digital Research's CP/M OS, but since it was supposed to be a stopgap while CP/M-86 was being developed, that did not matter too much.
A young and freshly scrubbed Bill Gates showed up and paid SCP $25,000 for a licence to market and sell 86-DOS. What Gates did not tell SCP was that he had a deal in the works with IBM to supply the operating system for the hardware giant's first personal computer; he kept this from SCP and Paterson until Microsoft had acquired the OS.
Gates, knowing IBM's plans and believing they would make shedloads of money, bought the software outright from SCP for a further $50,000. SCP was allowed to continue to offer the OS with its own hardware. Paterson joined Microsoft.
Paterson gives 27 July 1981 as the day the operating system was handed over; the name was changed and the renamed OS was released the next day. In August 1981, Big Blue released the IBM PC based on the OS, and the rest was history.
2015-48/0320/en_head.json.gz/4678 | The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 DVD for installation on an x86 platform
2015-48/0320/en_head.json.gz/4935 | OracleBlog
Helpful ideas and solutions for the Oracle application enthusiast. Check out the archives
The Two Ways of Doing a Job
Whether it's deployment, development, performance tuning, troubleshooting or something else, there are two fundamentally different ways of doing your job: doing it fast and doing it completely.
Doing it Fast
Sometimes you can make a case for doing something fast. If you're dealing with something you're only going to do once, in a problem space you're either already deeply familiar with or couldn't care less about, and have a number of other competing priorities for your time, that's usually when a case for doing something fast can be made.
Doing something fast usually means trying a few quick ideas, either from your own toolbelt or something you saw on this site or somewhere else, and hacking away until it works, whether you understand what you're doing or not. It's certainly the most time-efficient way of doing your job, at least in the short-term, and at some risk. Unfortunately there are some long-term consequences to this approach.
Your solution may have introduced a problem somewhere else
The problem could potentially be a more severe one that blows up in your face
If you broke something there's no record of what you did exactly
If the problem re-appears at some other place or time you're no better equipped to deal with it. As a corollary, you're no better at your job tomorrow than you were today
Furthermore the system is no faster and no more stable tomorrow than it was today
Nobody else knows what you did, how to maintain it, nor how to fix it should it break
You're at risk of developing a reputation for shoddy work with your client, boss or peers
At the very least, you certainly didn't do anything to develop a good reputation
You probably didn't enjoy it
For these and other reasons I've always tried to do the job the other way: completely. There are times when the client or manager wants the job done quickly, but if the job is important enough to be done quickly, then it's certainly important enough to be done properly. If it's not important enough to do properly, then it's probably not important enough to do at all.
Doing it Completely
What exactly does it mean to do a job completely? Even when someone knows what they're doing and does an excellent job, the failure to properly document it, test it, communicate it and cross-train the team generally eliminates most of the advantages of doing the job in the first place. Given that completing the task is generally harder and more time-consuming than those other tasks, it's tragic how someone is doing 80% of the work for 20% of the benefit.
A frequent mantra we've always shared among us is "the job is never done unless it's tested and documented." There's no better way to describe the fundamental difference between doing a job fast and doing it completely. For instance, we don't allow a customer ticket to be closed, nor a project task to be marked off until it has been appropriately tested and the work has been appropriately recorded in the appropriate place.
It may sound like common sense, especially for someone who is sufficiently dedicated to their profession to read sites like this, but it's alarming how many people we come across who simply don't properly research the technology with which they're working, document what they're doing, fully test their solution (with the users), or communicate their work to those involved. Consequently these people generally can't do their jobs any better in year five than they could in year two, the systems they support are no easier to maintain in year five than in year two, they spend a lot of their time fighting avoidable fires (though some of them actually like this), and they generally dislike their jobs, but don't have the reputation or knowledge to get other opportunities.
Doing a job completely involves five primary differences from doing a job fast: investing the time to understand the technology, taking the steps to complete the job properly, documenting the work, testing it thoroughly, and communicating with others.
1. Understand the Technology
Doing a job completely means looking at every task as an opportunity to improve your knowledge in the relevant technology. For example, coming to a site like this is an excellent way to get a deeper understanding not only of how to solve a problem, but how the particular component in question works. Get a few books, talk to a few experts, work with the technology yourself (in a private development environment, initially) and develop a thorough understanding. You originally paid thousands of dollars for your education, here's your chance to get it on the job for free (or sometimes even get paid for it).
You chose this profession for a reason - you love it. You should really love the deeper understanding of the technology this investment in time and effort will give you. Did you really get into this field to hack blindly at quick fixes? To apply other people's solutions that you don't even understand?
A thorough understanding will also help you do your job faster and better in the future, finding even better solutions to even tougher challenges, all the while reducing the chances of making mistakes.
2. Do the Job Properly
Every time you do a job the goal has to be to leave the system more robust, maintainable, faster and more error-free than when you found it, and by the largest degree possible.
For example, when writing code, include good exception handling, inline documentation, and instrumentation (debug messages). Write your code so that it is reusable in other systems. Furthermore, load the code into source control, and put good comments in when you check it in. Conduct a code review with your team when you're done. There are any number of ways to improve the code; these are just suggestions (and here are a few more tricks to
2015-48/0320/en_head.json.gz/7585 | 483 projects tagged "Operating Systems"
CD-Based (114)
Embedded Systems (30)
Floppy-Based (20)
Init (11)
Emulators (9)
DFSG (1)
Rebol (1)
SCons (1)
Jari OS (1)
i5/OS (1)
System Configuration Collector for Windows
collects configuration data from Windows systems and compares the data with the previous run. Differences are added to a logbook, and all data can be sent to the server part of SCC.
GPL, Utilities, Monitoring, Documentation, Systems Administration
ClearOS is an integrated network server gateway solution for small and distributed organizations. The software provides all the necessary server tools to run an organization including email, anti-virus, anti-spam, file sharing, groupware, VPN, firewall, intrusion detection/prevention, content filtering, bandwidth management, multi-WAN, and more. You can think of it as a next generation small business server. Through the intuitive Web-based management console, an administrator can configure the server software along with integrated cloud-based services.
GPL, Filters, Server, Firewall, Operating Systems
Tor-ramdisk is a uClibc-based micro Linux distribution whose only purpose is to host a Tor server in an environment that maximizes security and privacy. Tor is a network of virtual tunnels that allows people and groups to improve their privacy and security on the Internet. Security is enhanced in tor-ramdisk by employing a monolithically compiled GRSEC/PAX patched kernel and hardened system tools. Privacy is enhanced by turning off logging at all levels so that even the Tor operator only has access to minimal information. Finally, since everything runs in ephemeral memory, no information survives a reboot, except for the Tor configuration file and the private RSA key, which may be exported and imported by FTP or SSH.
GPLv3, Internet, Security, Communications, Networking
musl
musl is a new implementation of the standard library for Linux-based systems. It is lightweight, fast, simple, free, and strives to be correct in the sense of standards-conformance and safety. It includes a wrapper for building programs against musl in place of the system standard library (e.g. glibc), making it possible to immediately evaluate the library and build compact statically linked binaries with it.
MIT, Libraries, Embedded Systems, Operating Systems, software deployment
ALT Linux is a set of Linux distributions that are based on Sisyphus, an APT-enabled RPM package repository that aims to achieve feature completeness, usability, and security in a sensible and manageable mixture.
GPL, Security, Linux Distributions, Operating Systems
BitRock InstallBuilder allows you to create easy-to-use multiplatform installers for Linux (x86/PPC/s390/x86_64/Itanium), Windows, Mac OS X, FreeBSD, OpenBSD, Solaris (x86/Sparc), IRIX, AIX, and HP-UX applications. The generated application installers have a native look-and-feel and no external dependencies, and can be run in GUI, text, and unattended modes. In addition to self-contained installers, the installation tool is also able to generate standalone RPM packages.
Shareware, Software Development, Utilities, Desktop Environment, Systems Administration
Ubuntu Privacy Remix
Ubuntu Privacy Remix is a modified live CD based on Ubuntu Linux. UPR is not intended for permanent installation on a hard disk. The goal of Ubuntu Privacy Remix is to provide an isolated working environment where private data can be dealt with safely. The system installed on the computer running UPR remains untouched. It does this by removing support for network devices as well as local hard disks. Ubuntu Privacy Remix includes TrueCrypt and GnuPG for encryption and introduces "extended TrueCrypt volumes".
GPL, Security, Office/Business, Cryptography, Linux Distributions
amforth
amforth is an extendible command interpreter for the Atmel AVR ATmega microcontroller family. It has a turnkey feature for embedded use as well. It does not depend on a host application. The command language is an almost compatible ANS94 forth with extensions. It needs less than 8KB code memory for the base system. It is written in assembly language and forth itself.
GPLv2, Software Development, Scientific/Engineering, Hardware, Operating System Kernels
STUBS and Franki/Earlgrey Linux
The STUBS Toolchain and Utility Build Suite is a set of scripts which, together with a set of pre-written configuration files, builds one or more software packages in sequence. STUBS is designed to work in very minimal environments, including those without "make", and URLs are included so source and patches can be downloaded as necessary. Configuration files and scripts are provided which create boot media for Franki/Earlgrey Linux (one of several example busybox- and uClibc-based Linux environments), and the intention is that STUBS should be able to rebuild such an environment from within itself.
GPL, Utilities, Linux Distributions, CD-Based, Operating Systems
G4L is a hard disk and partition imaging and cloning tool. The created images are optionally compressed, and they can be stored on a local hard drive or transferred to an anonymous FTP server. A drive can be cloned using the "Click'n'Clone" function. G4L supports file splitting if the local filesystem does not support writing files larger than 2GB. The included kernel supports ATA, serial-ATA, and SCSI drives. Common network cards are supported. It is packaged as a bootable CD image with an ncurses GUI for easy use.
GPL, Internet, Systems Administration, Archiving, backup
A program that backs up and restores data.
A command-line or HTTP CGI hashing utility. | 计算机 |
2015-48/0320/en_head.json.gz/11081 | Last year, Hewlett Packard Company announced it will be separating into two industry-leading public companies as of November 1st, 2015. HP Inc. will be the leading personal systems and printing company. Hewlett Packard Enterprise will define the next generation of infrastructure, software and services.
Public Sector eCommerce is undergoing changes in preparation and support of this separation. You will still be able to purchase all the same products, but your catalogs will be split into two: Personal systems, Printers and Services and Servers, Storage, Networking and Services. Please select the catalog below that you would like to order from.
Note: Each product catalog has separate shopping cart and checkout processes.
Personal Computers and Printers
Select here to shop for desktops, workstations, laptops and netbooks, monitors, printers and print supplies Server, Storage, Networking and Services
Select here to shop for Servers, Storage, Networking, Converged Systems, Services and more.
©2015 Hewlett Packard Development Company, L.P.
2015-48/0320/en_head.json.gz/12443 | BioShock Infinite character 'highly altered' after talking with religious team members
BioShock Infinite creator Ken Levine says that he changed the portrayal of a character after speaking with religious team members who had concerns.
The name BioShock Infinite had us scratching our heads upon announcement, but it might just be the most appropriate title for the game. The more series creator Ken Levine talks about it, the more it seems to be about everything. It carries some religious themes along with its other big ideas, but Levine says some of those were "highly altered" after he took some time to talk with religious members of the studio."It's very important for me to understand a certain aspect of the religiosity of the world," Levine told Official PlayStation Magazine. "That's where I tune in as a non-religious person. ... I had some very valuable conversations. One of the characters in the game was highly altered based upon some very interesting conversations I had with people on the team who came from a very religious background, and I was able to understand they were kind of upset about something."He says the team doesn't shy away from difficult subjects, but wants to treat them with the proper amount of weight. "I think that we had a similar conversation about Bioshock 1," he said. "It involves infanticide, I don't think there's a larger taboo in the world. There were people who were very nervous about that. We didn't have that because we thought it would be cool. My feeling was if it's not just there to be exploitive, if it's true to the story and you’re telling something that you think is honest, then everything has a place."Levine says this doesn't mean the story has changed, just refined. "What I said to them was, 'I'm not going to change anything to get your approval, but I think I understand what you're saying and I think I can do something that's going to make the story better, based on what you said.' So I did that, and I'm grateful for them bringing in their perspective. The last thing I wanted to do was change something because it offends somebody, but the thing they pointed out was making it a lesser story." Steve Watts
Jobos
Okay. This article tells us a whole lot of nothing, really. "I changed something in a game after considering religion"... Cikatriz
I hate you all.
BioShock Series | 计算机 |
2015-48/0320/en_head.json.gz/12647 | WebDev Jobs Animated GIFs
HTML 4.01 Tags
Dreamweaver Expression Web
Web Development Business Issues
Java Programming ... From the Grounds Up
Java Programming ... From the Grounds Up by Mark C. Reynolds Reprinted from Web Developer® magazine, Vol. 2 No.1 Spring 1996 © 1996 With Java, it's possible to write some very sophisticated applets with a relatively small amount of code. Here's how. Wildly popular due to its interactive multimedia capabilities, Java programming leads the list of Internet development skills in current commercial demand. In this first half of our two-part tutorial on Java applet development, we explore the essentials of Java's components. These include how Java development tools relate to each other and--most importantly--how they are used to provide content that executes on the client side instead of on your server. Before Sun Microsystems introduced Java, most Web interactivity was accomplished via CGI (Common Gateway Interface) scripting. This is frequently utilized in forms or guestbooks where users type entries into text fields, then submit this information via their browser back to a host server. The host server then passes the information to an external program running on the Web server's machine. The output of this external program is then passed from the server back to the browser. CGIs must execute at least one round trip from the browser to the server and back. In contrast, when a Java-compatible browser accesses a Java-powered page, an applet--a small program written in Java--is copied to the browser's machine and executes there. It does not execute on the server the way a CGI program does. This local execution makes possible a much greater level of Web interaction and multimedia effects, unhampered by HTML restrictions or network bandwidth. Java programs can display slick animations, invite users to play games, show step-by-step tutorial instructions, or run demonstration versions of computer software. When a browser accesses a standard HTML page, the result is usually a static display. When it runs a Java applet, however, the results are limited only by the creativity of the applet's developer. The applet is a nearly independent application running within the browser. Getting Started Java is an object-oriented programming language that resembles a simplified form of C++. Java code displays graphics, accesses the network, and interfaces with users via a set of capabilities--known as classes--that define similar states and common methods for an object's behavior. Unlike other programming languages, though, Java programs are not compiled into machine code; instead, they are converted into an architecture-neutral bytecode format. This collection of bytes represents the code for an abstract Java virtual machine (VM). In order for these bytes to execute on a physical machine, a Java interpreter running on that physical machine must translate those bytes into local actions, such as printing a string or drawing a button. To run Java applets, you'll need a Java-enabled browser (such as Sun's HotJava, Netscape 2.0 or greater, or Internet Explorer 3.0 or greater) or you can use the appletviewer utility in Sun's Java Development Kit (JDK). The JDK also includes an interpreter for standalone Java applications (called simply java), as well as a debugger and compiler, called jdbg and javac respectively. Java applets use a subset of the full Java VM, with a variety of features disabled for security reasons. You can add Java applets to your Web pages with the <APPLET> tag, which can include attributes for the applet's height, width, and other parameters. 
Java-capable browsers treat Java applets like other media objects in an HTML document: they are loaded with the rest of the page, then verified and executed. Java Classes and Methods Java utilizes the basic object technology found in C++. In a nutshell, the Java language supports the idea of data packaging, or encapsulation, through its mechanism. A Java class is an association between data elements and/or functions, much like an extended struct in C (or a C++ class). In fact, there are no structs in Java at all; the mechanism of grouping together similar elements is achieved only by creating a class. The functional members of a class are referred to as the class methods. Just as a C struct may contain other structs within it, a Java class may be built on top of another class--although only one at a time--and inherit that class's behaviors as well. Java has its own syntax for describing methods and classes. It supports public class members, which are visible outside the class; protected members, which are visible only within the class and its subclasses; and private members, which are only visible within that particular class. Java supports abstract (virtual) classes, in which some or all of the member functions are declared, but not defined--they have no function body, so that only subclasses which fully define those functions may be used. If you have some experience with C++ programming, many of these concepts will be familiar to you. However, there are several striking differences between C++ and Java. Much of the implicit behavior that C++ takes for granted is absent in Java. For example, there are no default constructors: a Java program must explicitly call the operator new to create a new instance of a class. In addition, arithmetic operators such as "+" or "= =" may not overload in Java. There is no way for the programmer to extend the behavior of "+" beyond what Java provides intrinsically. Another highly visible departure from C and C++ is that there are no pointers (and logically, no pointer arithmetic) in Java. [ Java Programming ... From the Grounds Up: Part 2 > ] | 计算机 |
2015-48/0320/en_head.json.gz/12872 | International Nuclear Information System (INIS), IAEA Library & SDSG
No. 15, December 2013
INIS Thesaurus: new terms and new interface
A new version of the INIS Interactive Multilingual Thesaurus has been developed and is available at: http://nkp.iaea.org/Thesaurus.
This new Thesaurus version continues the commitment of the Nuclear Information Section (NIS) to offer users an enhanced experience and greater support for new technologies and devices.
Simplified user interface
The user interface has been simplified and is now consistent with that of the INIS Collection Search. It is especially well formatted for use with tablet computers. A user simply chooses a language of input, and begins typing in the text box provided. A list of terms matching the user input appears. The user may either select from the list, or continue typing. When a term is selected, the date the term was added to the INIS Thesaurus, a definition (if one exists), and related terms appears. Each of the related terms may then be selected, to provide that term’s wordblock.
Multilingual support
Descriptors may be entered in Chinese, English, French, German, Japanese, Russian, and Spanish. After selecting a term, translations are also available in each of these languages.
New Terms
The INIS Thesaurus is a controlled terminological knowledge base that has been developed over the years through the contribution of INIS Member States in all areas of peaceful applications of nuclear science and technology, which is also the subject scope of the INIS Collection. The thesaurus is primarily used for subject indexing of input into the INIS system and for retrieval of information from the database. Thanks to the vital support of the INIS Member States, the thesaurus has been translated into eight languages (i.e all IAEA official languages plus German and Japanese) and is available online to assist our global users as a tool for retrieval and for general reference services. It is a dynamic information resource that is continually updated to cater to new developments of terminologies in nuclear science and technology. As of the end of October 2013, the INIS Thesaurus contains a total of 30,746 terms. Some of the recent additions to the thesaurus include terms from diverse subjects such as: FUKUSHIMA DAIICHI NUCLEAR POWER STATION, LEPTOQUARKS, RADIOEMBOLIZATION, SMART GRIDS and FLEROVIUM.
Brian Bales
Acting Group Leader
Systems Development and Support Group
Bekele Negeri
Nuclear Information Specialist, INIS
Nuclear Information Section (NIS)
Department of Nuclear Energy
Vienna International Centre, PO Box 100
1400 Vienna, Austria
Email: NIS - Section Head Office
Websites: INIS & IAEA Library
@INISsecretariat | 计算机 |
2015-48/0320/en_head.json.gz/13170 | How to Go Back to Gmail's Older Version
If you have the new version of Gmail, but you don't like it, there are some ways to go back to the old version. Maybe you don't have a fast Internet connection and Gmail suddenly feels slower, maybe there are too many bugs or you love a Gmail-related extension or Greasemonkey script that suddenly doesn't work.Gmail provides a link to the old version (http://mail.google.com/mail/?ui=1), but the change is not persistent, so you'll still see the new version the next time you go to Gmail. And even if you bookmark the link, some Gmail-related plug-ins will still not work.Because the new version is only available in Firefox 2 and Internet Explorer 7, another way to revert to the standard Gmail would be to use another browser, like Opera or Safari. But that's not very convenient or practical.So what's the best solution? Change Gmail's interface language in the settings from English (US) to another language, like English (UK). You'll lose some features (creating Google Calendar events, PowerPoint viewer) and some names will be different (instead of Trash, you'll see Deleted Items), but these are minor changes.Gmail's new version will be rolled out to everyone in the coming weeks and will eventually replace the current version, but by the time it reaches everyone, Gmail will probably fix the performance issues and your favorite plug-ins will update their code. A Gmail API would be a much better idea for the future, because every change in Gmail's code can break a plug-in like Gmail Manager or some useful Greasemonkey scripts.Update: Obviously, this was just a temporary solution and it no longer works, since Gmail 2 is available for English (UK). If you still don't like the new version of Gmail, bookmark http://mail.google.com/mail/?ui=1, but that address won't be available indefinitely either.
Gmail, | 计算机 |
2015-48/0320/en_head.json.gz/13272
stock:back order
release date:May 2006
Andrew Hudson, Paul Hudson
Continuing with the tradition of offering the best and most comprehensive coverage of Red Hat Linux on the market, Red Hat Fedora 5 Unleashed includes new and additional material based on the latest release of Red Hat's Fedora Core Linux distribution. Incorporating an advanced approach to presenting information about Fedora, the book aims to provide the best and latest information that intermediate to advanced Linux users need to know about installation, configuration, system administration, server operations, and security.
Red Hat Fedora 5 Unleashed thoroughly covers all of Fedora's software packages, including up-to-date material on new applications, Web development, peripherals, and programming languages. It also includes updated discussion of the architecture of the Linux kernel 2.6, USB, KDE, GNOME, Broadband access issues, routing, gateways, firewalls, disk tuning, GCC, Perl, Python, printing services (CUPS), and security. Red Hat Linux Fedora 5 Unleashed is the most trusted and comprehensive guide to the latest version of Fedora Linux.
Paul Hudson is a recognized expert in open source technologies. He is a professional developer and full-time journalist for Future Publishing. His articles have appeared in Internet Works, Mac Format, PC Answers, PC Format and Linux Format, one of the most prestigious linux magazines. Paul is very passionate about the free software movement, and uses Linux exclusively at work and at home. Paul's book, Practical PHP Programming, is an industry-standard in the PHP community. manufacturer website | 计算机 |
2015-48/0321/en_head.json.gz/2720
The Game Design Forum
An Intro to Videogame Design History
This is the first section of a four-part essay on the history of videogame design. The Forum will soon be publishing the Reverse Design for Super Mario World. This article makes for a good introduction to that book because Super Mario World is best understood in its historical context. Super Mario World is a perfect example of a composite game, but that's jumping ahead; we should start at the beginning. Originally the research for these articles began as a way to develop a new curriculum for game development students. The idea behind the curriculum is this: students of studio art, music, film, architecture and many other disciplines spend a lot of time learning about the history of their discipline. They gain a lot by that kind of study. It stands to reason that game design students might benefit by studying the evolution of their craft similarly. By first mastering the original roots of videogame design and then building upon those fundamentals, students can come out of their game design programs with a systematic understanding of design: how it is done, how it began and where it is going. And so, to begin, we look back to the earliest days of videogame design.
Now I want to throw out a disclaimer here: this is meant as a theoretical history of videogames that explains broad trends in the evolution of game design. This theory does not explain everything; it does not attempt to do so. There is a definite bias in this theory for mainstream games. Also, the theory is focused primarily upon console games until the late 1990s, at which point it applies to console and PC games more or less equally, although it still retains a mainstream bias.
The Arcade Era
The core principles of videogame design were codified between 1978 and 1984. Videogames, as a form, go back much farther than that; there were videogames before even Pong came out. Obviously, those games had designers. But starting in 1978 it became clear to game designers that there were some ways in which videogames were very special. It was in 1978 that Tomohiro Nishikado's Space Invaders became a worldwide sensation, introducing videogames to a whole generation of people who had largely not played them before. Space Invaders featured a new and engaging difficulty structure. Because of a small error in the way the machinery of the game was built, the enemy invaders became incrementally faster when there were fewer of them on screen. This meant that every level would get progressively more challenging as it neared the end. Nishikado didn't originally intend for this to happen, but he found that an accelerating challenge made the game much more interesting, so he kept it. To add to this effect, he also designed each level to start off slightly more difficult than the last, by moving the invader fleet one row closer to the player at the start. You can visualize the game's difficulty curve like this:
In a certain sense, this challenge structure is videogame design. Almost every videogame since Space Invaders has employed this structure in one way or another. Certainly, Nishikado's contemporaries were quick to imitate and adapt this structure to their own games.
What designers of the era had discovered was that they could treat challenge as something that could go both up and down in a regular fashion, as though it were moving along an axis. We can refer to this axis as the axis of obstacles. (Obstacles being the things that stand between the player and victory.) At this point, the level of difficulty in a game corresponded directly to the challenge presented by the obstacles in a game. If a designer made the enemies faster or the pitfalls larger, the game became exactly that much harder, with essentially no mitigation from other elements in the game. It's easy to plot an axis of obstacles for an arcade game, because they have so few variables. For example, in Asteroids, there's really only one obstacle: the number of flying objects on screen.
To understand the axis of obstacles for Asteroids is to understand the design as a whole. This was a time when games were much simpler, from a design perspective, than they are now. Games would become more complex very quickly, however. The next industry-changing evolution toward contemporary games came in 1980, when Pac-Man was released.
The axis of abilities followed on the heels of the axis of obstacles, although in their earliest forms the two were nearly indistinguishable. If we think of the axis of obstacles as a range of challenges that can move up or down, we can say that the axis of abilities is a range of abilities for the player avatar that can grow, shrink or simply change. The foundational example is Pac-Man's power-up. We've all seen this one:
When Pac-Man gets the power pellet, he gains new abilities temporarily. For a brief period of time, Pac-Man no longer has to run from the enemy ghosts but instead can chase them. Most people are familiar with this power-up and how it works; many are not familiar with its subtle nuances, however. Pac-Man's design is actually full of movement along the axis of abilities. Pac-Man's movement speed increases for the first five levels, and then starts to decrease after level 21. The speed of the ghosts that chase him, on the other hand, goes up and stays up. Additionally, the duration of the power pellet effect slowly decreases. If anything, Pac-Man's movement along the axis of abilities is there to make the game harder, not easier. Yes, the power-up is a helpful, tactical tool, but the effectiveness of that power pellet decreases in sync with a decay in Pac-Man's speed, making the (necessary) help you get out of it less and less meaningful. This is basically a back door into the axis of obstacles. By subtracting player abilities over time, Pac-Man gets harder in the exact same way it would if the obstacles were increased.
This use of the axis of abilities as a kind of back door into the axis of obstacles is one that was very popular early on. Many games imitated or modified Pac-Man's use of power-ups, but none did so more clearly than Galaga. An ordinary if well-executed shooter, Galaga featured a very simple power-up. Using a relatively easy maneuver, players could get two ships instead of one.
By doubling the player's shooting ability, the game becomes fractionally less difficult, as long as the player doesn't lose the power-up. If there is a more obvious power-up than this one, I have not encountered it. The trend is clear: designers of the early 80s were using the axis of abilities as a supplemental way of controlling the level of challenge in a game. The differences between the effects of the axis of obstacles and the axis of abilities were few.
Although Pac-Man is credited with being the origin of the powerup, it was a young designer named Shigeru Miyamoto who made powerups what they are today. Miyamoto's idea was to treat the power-up as a way of changing the gameplay qualitatively rather than just making it easier or harder. His first game, Donkey Kong, features a power-up which accomplishes this effectively: the hammer. Donkey Kong is clearly a platformer; most of the game is spent running, jumping and climbing across platforms while avoiding deadly obstacles. When Jump Man picks up the hammer, however, something very important happens--the game stops being a platformer and becomes an action game.
With the hammer in hand, Jump Man loses most of his platforming abilities like jumping and climbing, and instead gains an action game ability: attacking with a weapon. For the duration of the power-up, the game crosses genres. The big revelation here for game designers was that, although the hammer was little more than a distraction, players liked it. Miyamoto and his colleagues realized that the axis of abilities wasn't just a way of making the game more or less difficult. The axis of abilities could be a way of bringing in design elements from other genres to expand the gameplay possibilities and entertainment value of videogames.
Miyamoto's discovery that the axis of abilities could allow designers to cross genres for a more engaging game would lead to a huge explosion in popularity for these games. In 1985 a new era began in videogame design: the era of composite games.
Next - The Composite Era
All material copyright by The Game Design Forum 2014 | 计算机 |
2015-48/0321/en_head.json.gz/10615
Knight Foundation awards $5000 to best created-on-the-spot projects | MIT Center for Civic Media
Andrew Whitacre, Communications Director
Andrew conducts the communications efforts for MIT Comparative Media Studies/Writing and its research groups, including the Center for Civic Media from 2008 to 2015. A native of Washington, D.C., he holds a degree in communication from Wake Forest University, with a minor in humanities, as well as an M.F.A. in creative writing from Emerson College. His work includes drawing up and executing strategic communications plans, with projects such as website design, social media management and training, press outreach, product launches, fundraising campaign support, and event promotion.
MIT Center for Civic Media
Knight Foundation awards $5000 to best created-on-the-spot projects Submitted by Andrew on June 23, 2009 - 3:32pm One of the little gems that the Knight Foundation introduced at the Future of News and Civic Media conference last week was to award five grand to the best collaborative projects created at the conference. We thought it might be a tall order, what with everything else the attendees were doing, but boy did they ever respond.
Attendees pitched 19 brand-new projects, and three of them--TweetBill, Hacks and Hackers, and the WordPress Distributed Translation Plugin--won cold hard cash to develop the ideas further. And the creators can thank their fellow attendees, because everyone used Mako Hill's preferential voting tool Selectricity to vote on the spot.
About the winning projects...
TweetBill
TweetBill sends you notification via Twitter when a bill reaches the stage in the US Congress where it's useful for you to call your Congresscritter! Sign up, tell us where you live, choose your issues, and you will get a tweet when your representative is slated to vote on a bill, along with the rep's phone number.
See the prototype http://www.tweetbill.com
Team Members: Nick Allen, Pete Karl, Ryan Mark, Persephone Miel, Aron Pilhofer, Ryan Sholin, Lisa Williams
The problem: Scattered through the worlds of journalism and technology live a growing number of professionals interested in developing technology applications that serve the mission of journalism. Technologists are doing more and more things that are journalistic; journalists are doing things that are more and more technological. These people don’t have a platform or network through which they can share information, learn from one another or solve each other’s problems. These people are scattered in organizations such as IRE, ONA, SND – and are in both academia and industry.
Proposal: Establish “Hacks and Hackers,” a network of people interested in Web/digital application development and technology innovation supporting the mission and goals of journalism. This is NOT a new journalism organization (SPJ, ONA, IRE, ASNE, etc.) . In fact we would call it a “DIS-organization.” The goals of this network are: (a) Create a community of people in different disciplines who are interested in these topics; (b) Share useful information (e.g., a tutorial on how to install Drupal); (c) Networking; (d) Jobs; (e) Professional development; (f) Etc.
How this network will work: (a) We will establish an online network that will aggregate and link out to relevant information provided by members; (b) Membership costs $0.00; (c) We will establish a system through which contributions to the network are rewarded – for instance, via some kind of points system that rewards members for, for instance, solving one another’s technical problem or creating a great tutorial; (d) We will seek to build bridges between journalism and academia, generating interest among computer scientists in the problems of journalism and media and among journalists in the opportunities presented by technology.
Team Members: Aron Pilhofer, Rich Gordon WordPress Distributed Translation Plugin
Description: A WordPress plugin which extracts and divides text and meta-text from blog posts into segments that are delivered to The Extraordinaries smart phone application so that bi-lingual users can volunteer five minutes while waiting in line at the supermarket to help translate news articles and blog posts. The plugin would also reassemble the translated segments into a single blog post and, optionally, give credit to all involved translators.
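As a rough, language-neutral sketch of the split-and-reassemble workflow described above (written in Java here purely for illustration; an actual WordPress plugin would be written in PHP, and every name below is hypothetical), the core idea is simply to cut a post into sentence-sized segments, hand each one out as a five-minute task, and stitch the returned translations back together in order:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the plugin's segmentation/reassembly idea.
public class SegmentSketch {
    // Split a post into sentence-sized segments suitable for micro-tasks.
    static List<String> segment(String post) {
        List<String> out = new ArrayList<>();
        for (String s : post.split("(?<=[.!?])\\s+")) {
            if (!s.isEmpty()) out.add(s);
        }
        return out;
    }

    // Reassemble the translated segments in their original order.
    static String reassemble(List<String> translated) {
        return String.join(" ", translated);
    }

    public static void main(String[] args) {
        List<String> segments = segment("A short post. It has two sentences!");
        // In the real workflow each segment would be sent to a volunteer's
        // phone and come back translated; here we simply echo them.
        System.out.println(reassemble(segments));
    }
}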
Background: Global Voices is the largest volunteer translation community in the world, both in terms of volunteers and the number of working languages. (New York Times article here.) On a daily basis the community translates independent media between Indonesia, German, Spanish, French, Italian, Malagasy, Dutch, Portuguese, Swahili, Serbian, Macedonian, Arabic, Farsi, Bangla, Chinese, Japanese, Hindi, Hebrew, Russian, Albanian, and more. Developing a mobile interface to social translation would allow Global Voices and other organizations to recruit volunteer translators who don't have regular access to a desktop internet connection.
Background -- The Extraordinaries (http://www.BeExtra.org):
The Extraordinaries delivers micro-volunteer opportunities to mobile phones and web browsers that can be done on-demand and on-the-spot. Currently available as an iPhone® application through Apple's iTunes® store, The Extraordinaries enables organizations to connect with their supporters through these micro-volunteer opportunities, strengthening relationships while leveraging their "crowds" to complete real work such as image tagging, translation and research.
Team Members: David Sasaki, Jacob Colker
conferenceknc09Civic mediaKnightconfGoogle Plus One
Permalink Submitted by Anonymous on June 24, 2009 - 3:43pm Comment: The last idea is really great. Global Voices is a great site and a program to divide and disperse text to be translated in little batches sounds like a great way to make sites like Global Voices run more smoothly. The move towards not just base aggregation of content but rather editing, contextualizing, and organizing (as well as translating to increase accessibility) is a much needed advance. There are some great interviews with top journalists about the future of journalism at http://www.ourblook.com/component/option,com_sectionex/Itemid,200076/id,... which I have found useful in relation to these subjects.
Permalink Submitted by Lorraine on June 25, 2009 - 1:56am Comment: I really like the TweetBill application! It will hopefully get more people to become involved and let their voice be heard. I'm hoping it really catches on to let politicians know what we want at timely moments.
Permalink Submitted by Baby Pushchairs on June 28, 2009 - 7:41am Comment: These are just some examples of what's so great about the way the internet and technology in general are going to moving ahead for the genuine betterment of life. I can't wait to see what the next 5 to 10 years brings with the internet as the possibilities are just amazing- as Apple have illustrated with their platform of Apps used with the iPhone.
Finally- thanks for the details.. these were some great bits from the Future of News and Civic Media conference.
Permalink Submitted by Anonymous on December 22, 2009 - 5:11am Comment: Tweetbill sounds a great idea, unfortunately I'm over in the USA but I'd like to see a uk parliament version.
All content Attribution-ShareAlike 3.0 United States (CC BY-SA 3.0) unless otherwise noted.
A project of MIT Comparative Media Studies and the MIT Media Lab with funding from the John S. and James L. Knight Foundation, the Ford Foundation, the Open Society Foundations, and the Bulova-Stetson Foundation | 计算机 |
2015-48/0321/en_head.json.gz/10987 | The Internet is now a dominant tool for regular people
The Internet has succeeded in becoming a tool that many regular people turn to in lieu of alternatives for communicating and for finding information.
I've written a few essays that deal with the idea of computer applications that are "tools" (such as Thoughts on the 20th Anniversary of the IBM PC, Metaphors, Not Conversations, and The "Computer as Assistant" Fallacy). I think that examining aspects of people's use of tools is important to see where our use of computer technology will go.
There's an old saying that "When all you have is a hammer, everything looks like a nail." Sometimes it is said in a derogatory way, implying that the person is not looking for the "correct" tool. For example, people used to laugh at how early spreadsheet users did their word processing by typing their material into cells, one cell per line, rather than learn another, new product.
I think a more interesting thing to look at is what makes a tool so general purpose that we can logically (and successfully) find ways to use it for a wide variety of things, often ones not foreseen by its creators. Tools that can be used for many different important things are good and often become very popular.
I have lots of experience watching the early days of the spreadsheet. Many people tried to create other numeric-processing/financial forecasting products soon afterwards, but none caught on like the spreadsheet. Most of these tools were tuned to be better suited for particular uses, like financial time-series. What they weren't as well tuned for were free format, non-structured applications. What were successful were later generation spreadsheets that kept all of the free format features, and added additional output formatting options, such as commas and dollar signs, graphing, mixed fonts, and cell borders and backgrounds.
The automobile caught on especially well here in the US partially because of its general purpose nature. First used for recreation, taking you out to the "country" (wherever that may be), it could also be used in rural areas to go into town or visit friends, for commuting in suburban and urban areas, visiting people at great distances, as part of work (such as the old family doctor), as a means for status or "freedom", etc., etc. In our large growing country, no other means of transportation met as many needs during the years when we built up much of society around it.
The Internet successes
Given those general thoughts, let's look at applications of the Internet. Where have they become accepted and entrenched general purpose tools among regular people?
The first and most obvious accepted use is as communications tool with people you already know. Email and instant messaging has gone way past the early adopter phase. For many families, communities, and businesses, it has become one of the dominant forms of communication. Email is up there with telephone and visiting, and more and more is displacing physical mail and fax.
This is pretty amazing. It took the telephone years to reach this level of acceptance for such mundane uses. Fax never reached it for personal uses.
The second most obvious accepted use is as an information gathering tool. Research such as that published by the Pew Internet Project comes up with numbers like these: Over 50% of adult Internet users used the Internet (most likely the Web) for job-related research. On any given day, 16% of Internet users are online doing research. 94% of youth ages 12-17 who have Internet access say they use the Internet for school research. 71% of online teens say that they used the Internet as the major source for their most recent major school project or report. (From The Internet and Education.) During the 2000 Christmas season, 24% of Internet users (over 22 million people) went to the Web to get information on crafts and recipes, and to get other ideas for holiday celebrations. 14% of Internet users researched religious information and traditions online. (This is for an event that happens every year of their lives.) (From The Holidays Online.) 55% of American adults with Internet access have used the Web to get health or medical information. "Most health seekers treat the Internet as a vast, searchable library, relying largely on their own wits, and the algorithms of search engines, to get them to the information they need." (From The Online Health Care Revolution.) Of veteran Internet users (at least 3 years) 87% search for the answer to specific questions, 83% look for information about hobbies. (From Time Online.)
Anecdotal evidence I've seen: Driving directions web sites like Mapquest are becoming a preferred tool for drivers, supplementing other forms of getting directions. More and more people research vacations on the Web before committing to accommodations or activities. 50% of all tableservice restaurants have web sites (and it isn't so that you'll order a meal to be delivered by Fedex). Search engines are very popular, and people will switch to better ones because they know the difference. Web site addresses are replacing 800 numbers in advertising and public service announcement "for more information".
At the Basex KM & Communities 2001 West conference, IBM Director of Worldwide Intranet Strategy and Programs, Michael Wing, presented some statistics that show where things are going. In surveying IBM employees' feelings about what were the "Best" (most credible, preferred, and useful) sources of information, in 1997 their Intranet was listed a distant 6th, after the top co-workers, then manager, INEWS, senior executive letters, and external media. In 2000, the Intranet was tied for first with co-workers, and the Internet (outside IBM) had moved into a place just below external media and senior executive letters.
For many people, the general Internet is on par with other, older public information sources, and sources they have relationships with or an affinity for (certain web sites, people through email, etc.) are trusted even more. The huge rush of people to the Internet during times of tragedy or rapidly unfolding events that are of deep importance to them shows this. When you get a call from a friend, or a co-worker pokes their face into your office with some news, an awful lot of people go straight to the Internet to learn more.
So, I feel that the Internet has passed that magic point for most users (who now amount to over half of the US population that can read): it is one of those tools that they already know how to use, and will depend upon to do all sorts of things, often instead of using other "better" ways of getting things done.
Where the Internet hasn't come that far
In contrast to these successes in changing behavior to favor Internet usage, I don't believe that buying on the Internet has passed that point for most people. Amazon and similar ventures that rely on purely electronic Internet-based transactions have failed to become the way we'll buy everything from toothpaste to lawn chairs. Some people do, but not the majority for any large portion of their purchasing. (Of course, for researching the purchase, the Internet is becoming extremely important.) A few categories, like travel, have broken out into popular acceptance, but not to the level of communications or information seeking.
Also, it seems that the Internet has not passed that point of being a major tool for passive entertainment. While it has become key to getting information out about movies (and credited for creating the main "buzz" to launch some), few people go to "watch the Internet". It is used to transfer music, but then only for songs the user wants, not as a passive receiver from a dominating "service". Just as TV is being affected by the new generations of people who wield the remote control to flit from program to program as they see fit on dozens or more channels, the "user choice" view of the Internet, much like the use for searching, is what you mainly find. Somebody else doesn't tell you what's interesting -- you decide and then go to the Web to look for more about it.
The implications for those with information that they want others to find, or who want people to communicate with them, are that they should include the Internet in their plans. For communications, an email address is a minimum, though for some types of interactions an instant messaging screen name is also important. For disseminating information, a web site is extremely important, as is being findable through various means (in search engines and directories, on printed material, in general advertising in any medium, or through links from likely places on the Web).
As we find over and over again about new technologies, users choose what they want to use them for. The purveyors of the technology can advise, but they can't control. We should learn from what users gravitate to.
The Internet has succeeded in becoming a tool that many regular people turn to in lieu of alternatives for communicating and for finding information. It has become a new, often-used tool in their personal toolbox.
What's good is that these two uses, communications and finding information, have proven to be ones for which people willingly pay.
As further proof and insight into the use of Internet searching as a general purpose tool, read Richard W. Wiggins' fascinating article about the evolution of Google during the 9/11 tragedy: The Effects of September 11 on the Leading Search Engine.
© Copyright 1999-2015 by Daniel Bricklin | 计算机 |
2015-48/0321/en_head.json.gz/11092 | Blocked Sites, No Longer Available in Google Search?
Last year, Google released a feature that allowed you to block sites from appearing in your search results. After clicking a result and going back to the search results page, Google displayed a special link next to the result for blocking the entire domain.The feature no longer seems to work: the "block" link is no longer displayed, the preferences page doesn't mention the feature and the blocked domains still appear in Google's results. The page that allows you to manage blocked sites is still available."When you're signed in to Google, you can block a specific website from appearing in your future search results. This is a helpful option when you encounter a site that you don't like and whose pages you want to remove from your future results. If you change your mind, you can unblock the site at any point," explains a help center page.Update: Google's Matt Cutts says that this could be a temporary issue. "The right people are looking at what needs to happen to re-enable this, but it might take some time." | 计算机 |
2015-48/0321/en_head.json.gz/11195 | Fedora Core 14 x86 DVD
release date: Nov. 2, 2010
The Fedora Project is a Red Hat sponsored and community-supported open source project. It is also a proving ground for new technology that may eventually make its way into Red Hat products. It is not a supported product of Red Hat, Inc.
The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from free software. Development will be done in a public forum. The Red Hat engineering team will continue to participate in the building of Fedora and will invite and encourage more outside participation than was possible in Red Hat Linux. By using this more open process, The Fedora Linux project hopes to provide an operating system that uses free software development practices and is more appealing to the open source community. Fedora 14, code name 'Laughlin', is now available for download. What's new? Load and save images faster with libjpeg-turbo; Spice (Simple Protocol for Independent Computing Environments) with an enhanced remote desktop experience; support for D, a systems programming language combining the power and high performance of C and C++ with the programmer productivity of modern languages such as Ruby and Python; GNUstep, a GUI framework based on the Objective-C programming language; easy migration of Xen virtual machines to KVM virtual machines with virt-v2v...." manufacturer website
1 DVD for installation on x86 platform | 计算机
2015-48/0321/en_head.json.gz/11196 | Andrew Hudson, Paul Hudson
Continuing with the tradition of offering the best and most comprehensive coverage of Red Hat Linux on the market, Red Hat Fedora 5 Unleashed includes new and additional material based on the latest release of Red Hat's Fedora Core Linux distribution. Incorporating an advanced approach to presenting information about Fedora, the book aims to provide the best and latest information that intermediate to advanced Linux users need to know about installation, configuration, system administration, server operations, and security.
Red Hat Fedora 5 Unleashed thoroughly covers all of Fedora's software packages, including up-to-date material on new applications, Web development, peripherals, and programming languages. It also includes updated discussion of the architecture of the Linux kernel 2.6, USB, KDE, GNOME, Broadband access issues, routing, gateways, firewalls, disk tuning, GCC, Perl, Python, printing services (CUPS), and security. Red Hat Linux Fedora 5 Unleashed is the most trusted and comprehensive guide to the latest version of Fedora Linux.
Paul Hudson is a recognized expert in open source technologies. He is a professional developer and full-time journalist for Future Publishing. His articles have appeared in Internet Works, Mac Format, PC Answers, PC Format and Linux Format, one of the most prestigious linux magazines. Paul is very passionate about the free software movement, and uses Linux exclusively at work and at home. Paul's book, Practical PHP Programming, is an industry-standard in the PHP community. manufacturer website | 计算机 |
2015-48/0321/en_head.json.gz/13876 | Edith Roman Forms E-Mail Company
The owners of Edith Roman Associates, Pearl River, NY, won't launch new company E-Post Direct until next week at the DMA's 81st Annual Conference in San Francisco, but they already have put into practice a proprietary method of enhancing lists with e-mail addresses.E-Post Direct is sending electronic invitations to visit its booth to a list formed by appending the fall show registration list with e-mail addresses. The new company, headed by president Michelle Feit, will offer all the services a client would need to launch an e-mail campaign, including e-mail list management, posting and transmission of e-mail lists, database and campaign management as well as copy, graphic and banner creation."We want to apply lessons of direct response to the Internet, because we feel like no one else has done this," Feit said. Edith Roman now manages more than half a million opt-in e-mail addresses that it claims are 100 percent deliverable. Those files will be transferred to E-Post Direct and act as a starting point to build its e-mail business. E-Post Direct, which can enhance up to 30 percent of a list with e-mail addresses, will charge $250 per thousand to enhance a mailer's list and send a first transmission. That enhancement cost of 25 cents per name compares to the $20 to $30 per name being spent by customers on their own, according to Edith Roman president Stevan Roberts.Its database and campaign management services will let clients gather information from different sources in one central place. The database will track customers' preferences such as whether they like to receive plain text or HTML messages and if they like to opt in or opt out of future mailings. With this data, messages can be populated with personal information to increase response rates.Aware of the stringent privacy protections being applied to e-mail, E-Post Direct will only send additional messages to recipients who take action by opting in to a first message and will include an opt-out option on each subsequent transmission.Roberts said e-mail is a much more profitable business model for marketers than any other application of the Internet. "E-mail makes a ton of money for clients," he said. "It reaches people where they are when they want to be reached." | 计算机 |
2015-48/0321/en_head.json.gz/15576 | Home > Risk Management
OverviewGetting StartedResearchTools & Methods Additional Materials ConsultingOur People
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods | 计算机 |
2015-48/0321/en_head.json.gz/16332 | FROM MARX TO MAO
Which Browser?It Does Make a Difference
Preliminary Advisory
All of the material contained in this Website was prepared using a Mac SE/30; as such, all of the lay-out and formating of the texts in From Marx to Mao has been (and will continue to be) constrained by the limitations of a monitor with only an 8 1/2" diagonal viewing area! Although this doesn't present any problems when viewing only textual material, it is very much a problem when it comes to presenting data in tabular form. The "schemes of reproduction" in Capital, Vol. II, for example, as well as many of Lenin's concrete studies of the development of capitalism and the process of class differentiation, especially in agriculture, make extensive use of data tables. According, if you are a "low-end user" with a relatively tiny viewing area like myself be advised that you will have to size your browser window to encompass the entire width of the screen (roughly 7 inches). Similarly, although less important, many of the documents include a table of contents which will appear as a colossal mess unless the browser is set to the maximum possible screen width. Those possessing larger monitors (the vast majority, I assume) will, of course have no problem, although the table of contents in many of the texts will undoubtedly appear to lack "a sense of proportion" if the viewing area is much wider than 7 inches.
Which Browser to Use?
All of the text in the documents was written in 12 point Geneva, all of the data tables use 9 and 10 point Geneva, and all of the files were prepared using Netscape. That should say it all. If you use Netscape 2.0 or higher, and set the "proportional font" to Geneva, you'll have no problems. (If you don't know how to do this, go to "Options," then to "General Preferences," then to "Fonts," and finally set the "proportional font" to Geneva.) But there is a little more that should said.
Notwithstanding my best efforts, I failed miserably in my goal to prepare this material for viewing on "any" browser. Off the top, you'll need a graphic browser. Marx's symbolic notation and some of his formulas in Capital, Vol. II, (as well as a page in one of Lenin's texts) could not be properly reproduced (at least by me) with current HTML standards (3.2). Although rather unaesthetic renderings were possible in many instances (using "pre-formatted text"), there were some instances where I had to use image files to properly (i.e., unambiguously and unobtrusively) present Marx's notation to the reader. Accordingly, a "graphic" brower is preferable, although not "absolutely necessary" if you're absolutely stuck with a "non-graphic" browser. If the latter is the case, I have provided footnotes that try to offer an accurate discription of what you're "not seeing" and I have included my "unaesthethic renderings" in the source code of the relevant pages of the text. Of the graphic browsers, I am only familiar with Netscape, Internet Explorer, and Mosaic (which for the last few months repeatedly crashes my machine when launched). One necessarily gains a deeper appreciation (or lack thereof) of the features of specific browers when forced to prepared documents that involve more than straight text. Trying to make allowances for the variations between different browsers (as well as those between successive versions of the same browser) is very time-consuming and often seems like a waste of time. Still, every effort has been made to prepare the various texts for viewing with either Netscape or Internet Explorer. There are, however, a few exceptions. (They are identifiable by the presence of an asterisk (*) after the title). Given the choice between Netscape and Internet Explorer, the latter proved to be too inflexible and limited (at least until I learn how better to use it?) for the demands of the texts that included tabular data and "pre-formatted text". You may find the following little table useful; it tries to summerize the strengths and weakness of the two browsers based only on my experience with them in relation to the requirements of the material in this Website using an antiquated Mac. (I have no experience with the "Wintel" platform and would appreciate any guidance from those of you who do.) Pre-
formatted
(layout, etc.)
"Refreshes"
(est.)
"noise"
require-
ments
(MB)
Type of Mac
68k Power Mac
Netscape 1.1N
where you left off
(fast)
Netscape 2.0
very good control
only the basics
at the very beginning
(slow)
Internet Explorer 3.0 (beta)
only the basics (less than
(slower)
Internet Explorer 3.01
From the above it is clear that Netscape 3.0 stands out; it is a RAM-hog, but it will handle any file in this site. Of the others, Netscape 2.0 is perhaps the best bet. It requires no more RAM than Internet Explorer and it loads (large files) two to three times faster than the latter. Its one drawback (based on my experience only) is that it slows down a little after its loaded about 350k and really struggles to load a 450k file; above that an "out of memory" message looms on the horizon. Common to both versions of Netscape is an extreme sensitivity to image "noise", those little "flecks" that pepper poor quality images. There aren't that many of these to worry about, but you will come across them. As for Internet Explorer, version 2.0 can load larger files than Netscape 2.0, with 550k appearing to be the Explorer's upper limit, but it is really slow going. The beta version of 3.0, as well as the non-beta 3.01 release, on the other hand, are defintely inferior to 2.0 in every respect that counts in terms of this Website. All of these versions of Explorer, however, do handle those poor quality images as if they were flawless gems! Big deal. Those using the smaller capacity browser will not be able to download the larger texts as a single file, but all texts in excess of 500k have also been divided into two parts, each of which is small enough for any of the above browsers to handle individually. The index of titles for each library includes the file size of each text, and all files exceeding 500k, when "clicked," will bring up a message informing you that you can load the single large file or either of the two smaller files.
Back to From Marx to Mao | 计算机 |
2015-48/0322/en_head.json.gz/961 | Posted Does Adobe Flash 11 have a future on the Web? By
Adobe has formally announced it will be shipping Adobe Flash Player 11 and Adobe Air 3 in early October. Adobe touts the new versions as a “game console for the Web,” with graphics performance up to 1,000 times faster than Flash Player 10 and Adobe Air 2, thanks to full hardware-accelerated rendering for both 2D and 3D graphics and 64-bit support on Windows, Mac OS X, and Linux. However, while Adobe Flash remains common on PCs, Apple has famously eschewed Flash on its iOS mobile platform, and even stopped shipping it on Macs (although Mac users are free to install it themselves). This week, Microsoft announced the version of Internet Explorer for its Windows 8 Metro environment won’t support browser plug-ins — and that means no Flash in the browser.
Is Adobe Flash going to fade away in the face of HTML5 and online video delivered in formats like H.264 and Google’s WebM? Or will Adobe’s advances to the platform let it remain a major player in Internet development even as it starts to disappear from people’s browsers?
What Adobe’s Bringing to Flash 11 and Air 3
The flagship development in Flash Player 11 and Air 3 is Stage 3D, a new hardware-accelerated graphics architecture for 2D and 3D rendering performance. Adobe is touting Stage 3D as capable of delivering console-quality games, animating millions of onscreen objects smoothly at 60 frames per second, even on older computers that lack modern video hardware — like “Mom’s old PC with Windows XP.” The technology doesn’t just apply to games: Stage 3D and Adobe’s hardware-accelerated architecture will also deliver improvements to video conferencing and playback of high-definition video (complete with 7.1 surround sound support).
These improvements aren’t just aimed at desktop computers, but also to Internet-savvy televisions and, of course, mobile devices including Android, BlackBerry, and—yes, Apple’s iPhone, iPad, and iPod touch. Video of what some developers are doing with Stage 3D shows the technology’s potential, especially compared to current “state of the art” Flash games.
To further enhance Flash’s appeal to game developers and content producers, Flash Player 11 and Air 3 will also support content subscriptions and rentals via Adobe Flash Access and Adobe Pass. The feature is aimed more at Internet-connected TVs so operators and content providers (say, perhaps a Netflix competitor) can offer pay-per-view and rental content, but the technology also scales to desktop and mobile platforms.
What about Apple’s iOS and Windows Metro?
So how is Adobe getting its technology onto iOS devices, where Apple has famously banned Flash? That’s where Adobe Air comes in: Adobe Air enables Flash developers to package their Flash-based projects as native applications for a variety of platforms, including Windows and Mac OS X, but also Android, BlackBerry (including the PlayBook), and Apple’s iOS. In broad terms, Adobe Air gives Flash developers a “Save as App” command.
The ability to roll up Flash projects as apps is important. The Adobe Flash plug-in might be banned from iOS’s Safari Web browser —and, apparently, from Internet Explorer in Windows 8 Metro — but developers can build for those platforms by using Adobe Air to save out their projects as standard apps. On platforms where Air is built in, like RIM’s BlackBerry PlayBook, those apps can be comparatively svelte and quick to download. Adobe says it expects Adobe Air will enable developers to build Flash-based apps for Windows 8 Metro, just like they currently can for iOS. As Web-browsing platforms drop support for Adobe’s Flash plug-in, Adobe Air is an increasingly important part of the company’s claim that its Flash technology can reach a billion people.
Apps built using Adobe Air have often looked gorgeous — many of Adobe’s primary customers are designers and media professionals, after all — and the platform has had some early successes, including mainstream apps like TweetDeck (which got acquired by Twitter), and the current top iPad game on the iTunes App Store: Machinarium. However, Adobe Air apps have also been roundly criticized for poor performance and hogging system resources. For instance, Machinarium is limited to the more-powerful iPad 2 and sticks with 2D (rather than 3D) graphics.
Flash’s value proposition
Adobe is touting Flash Player 11 (and Air 3) as the “next-generation for the Web.” The company argues more than two thirds of all Web-based games are currently powered by Flash, and Flash games have an audience more than 11 times larger than the Nintendo Wii. But this doesn’t change the fact that Flash is beginning to disappear from Web browsers: iOS doesn’t support it, Windows Metro won’t support it, and Macs don’t ship with it. Where Adobe Flash used to be a near-ubiquitous technology, the ability to deploy Flash content to Web users is increasingly shaky, and several high-profile security gaffes involving Flash haven’t helped the technology’s reputation in consumers’ eyes. In fact, yet another security patch for a Flash vulnerability in Windows, Android, Mac OS X, and Linux is due today, and it’s already being exploited on the Internet.
Nonetheless, Flash has a strong appeal to developers creating interactive content because Flash projects look the same and — kinda — act the same everywhere, regardless of platform. Although HTML5, JavaScript, and even WebGL have made significant strides in the last few years, those technologies cannot yet make the same claim: Wide variations in browsers, performance, and technology support make developing something like 3D games using open Web technologies difficult to near-impossible. Flash developers do face many platform-specific challenges—developing a game designed to work with a mouse is not the same as making a game that works with touchscreens and gestures, but Flash offers a far more uniform platform for interactive content than today’s open Web technologies. Flash dangles the possibility of — dare we say it? — a write-once, run-anywhere solution for interactive content.
Flash’s future almost certainly lies in interactive content like games, not the simple delivery of video and audio. Where Flash used to be the de facto platform for pushing video to Internet users, a study earlier this year found nearly two thirds of Web video had stepped away from Flash—that’s mostly due to the market pressure of Apple’s iOS platform, and the numbers are probably higher now.
Flash’s value contradiction
Adobe says Flash 11 is the “next-generation console for the Web,” but the simple fact is that Flash is slowly vanishing from the Web, or at least from Web browsers. It doesn’t matter if Adobe can crank up graphics performance. As a growing number of Internet users access the Web in browsers that don’t support Flash, Flash content aimed at Web browsers might as well be moldering in a cardboard box in the basement of some county courthouse. Or, perhaps worse, it might as well have been written with Java.
Native apps sidestep a ban on the Flash browser plug-in because they don’t require a plug-in, and don’t run in a browser. However, they also can’t appear embedded in Web sites, so Adobe Air isn’t a solution for Web publishers looking to embed audio, video, and (most importantly) interactive elements in their Web pages. Developing a Web site and developing an app — let alone an app targeting multiple mobile and desktop platforms — are very different things.
Despite Adobe’s focus on Web-based gaming with Flash 11 and Air 3, it seems clear Flash’s value to Web publishers is declining, even as its value to app developers might be on the rise. The question then becomes whether Flash and Adobe Air can compete with native app development tools. To date, with Flash Player 10 and Adobe Air 2, the answer is no. Perhaps Adobe can change that with Flash Player 11 and Adobe Air 3. | 计算机 |
2015-48/0322/en_head.json.gz/964 | D-Lib MagazineJuly/August 1998
Directions for Defense Digital Libraries
Ronald L. Larsen
US Defense Advanced Research Projects Agency (DARPA)
Arlington, Virginia USA
[email protected]
The role of the Department of Defense (DoD) has shifted dramatically in the 1990's. In the forty years prior to 1990, the DoD engaged in three domestic and seven foreign deployments. In contrast, the Department engaged in six domestic and 19 foreign deployments between 1990 and 1996, most of which were for peacetime activities such as humanitarian assistance and disaster relief. This represents a 16-fold increase in the rate of deployment. Whereas major deployments in the pre-1990 period occurred over substantial periods of time, allowing adequate time for preparation, the post-1990 deployments are characterized more by their rapid and transient nature. Analysts and planners face increasing demands to interpret rapidly unfolding situations and to construct alternative responses. The Department now speaks in terms of an "OODA" loop (Observe, Orient, Decide, Act) and strives to increase the rate at which this loop can be traversed, in order to be more responsive to the increasing frequency and pace of events. Much of the activity conducted within the OODA loop is itself information intensive, for placing the specific task at hand within a regional or situational context amplifies the need for a timely, accurate, and comprehensive world view. Digital library technologies are critical to addressing these information intensive, time-critical situations effectively.
The 1990's has also witnessed the explosive growth of the World Wide Web (WWW), expanding dramatically the information which is accessible and potentially usable to analysts and strategists. But identifying, acquiring, and interpreting that information which is critical to understanding a particular event or situation in a timely manner, out of the sea of data of which the WWW is composed, poses enormous problems. This is a problem of major interest to the DoD. While an exponentially growing volume of information is potentially available to one with copious time and substantial diligence, DARPA's digital library research intends to make this information accessible and usable to those confronting difficult decisions without the luxury of time. This is, in large part, the motivation behind DARPA's Information Management (IM) program (http://www.darpa.mil/ito/research/im/index.html).
DARPA's Information Management Program
The IM program envisions digital libraries within a global information infrastructure, in which individuals and organizations can efficiently and effectively identify, assemble, correlate, manipulate, and disseminate information resources towards their problem-solving ends, regardless of the medium in which the information may exist. It makes no assumptions about commonality of language or discipline between problem solver and the information space, but, instead, provides tools to navigate and manipulate a multilingual, multidisciplinary world. It does assume, however, that task context, user values, and information provenance are critical elements in the information seeking process. The IM program seeks major advances in acquiring and effectively using distributed information resources to provide the Defense analyst with a comprehensive ability to assess a rapidly changing situation. Its products are scalable, interoperable middleware:
to manage exponentially growing information resources, to focus the analyst's attention on highly relevant materials, to organize information for rapid exploitation in unpredictable circumstances, and to provide superior ability to evaluate all aspects of a given situation to inform rapid decision processes appropriately.
The accelerating pace of world events coupled with the expansion of the WWW conspire to clarify the urgency of developing adaptive technology to rapidly acquire, filter, organize, and manipulate large collections of multimedia and active digital objects in a global distributed network to provide the ability to investigate and assess time-critical, multifaceted situations. DoD's information management requirements typically push the current boundaries by two orders of magnitude in quantitative parameters such as numbers of coordinated repositories, sizes of collections, sizes of objects, and timeliness of response. In addition, qualitative improvement is required in the creation, correlation, and manipulation of information from multiple disciplines and in multiple languages.
Directions and Challenges
The DARPA IM program is narrower and more sharply defined than related federal research programs with which it collaborates (e.g., NSF's Knowledge and Distributed Intelligence [KDI] program (http://www.nsf.gov/pubs/1998/nsf9855/nsf9855.htm) and Digital Libraries Initiative,
Phase 2 [DLI-2] program) and Digital Libraries Initiative, Phase 2 [DLI-2] program (http://www.nsf.gov/pubs/1998/nsf9863/nsf9863.htm
Today's information retrieval systems rely largely on indexing the text of documents. While this can be effective in bounded domains in which the usage and definition of words is shared, performance suffers when materials from multiple disciplines are represented in the same collection, or when disparate acquisition or selection policies are active. Rather than being the exception, however, this is typically the rule (especially, on the Web). Techniques for mapping between structured vocabularies begin to address this problem for disciplines which are fortunate enough to have a formalized vocabulary (http://www.sims.berkeley.edu/research/metadata/).
Techniques are needed that can look beyond the words, however, to the meaning and the concepts being expressed. Automated techniques for collection categorization are required, and some success has been recently reported using statistical approaches on large corpora (http://www.canis.uiuc.edu/interspace/). Query languages and tools seek to identify materials in a given collection which are similar to the characteristics expressed in a given query. But these characteristics focus on the information artifact and have yet to consider non-bibliographic attributes which might serve to focus a search more tightly, such as types of individuals who have been reading specific material, the value they associated with it, and the paths they traversed to find it (http://scils.rutgers.edu/baa9709/).
The navigational metaphor has become ubiquitous for information seeking in the network environment, but highly effective and facile tools for visualizing and navigating these complex information spaces remain to be fully developed. Incorporation of concept space and semantic category maps into visualization tools is a promising improvement. Concept spaces and semantic category maps are illustrative of statistically-based techniques to automatically analyze collections, to associate vocabulary with topics, to suggest bridging terms between clusters of documents, and to portray the clustered document space in a multidimensional, navigable space, enabling both high level abstraction and drill-down to specific documents. (The previously-identified URL for the Interspace project at the University of Illinois, [http://www.canis.uiuc.edu/interspace/], provides more detail on these techniques.) Additional approaches, including alternatives to the navigational metaphor are needed.
Scalability and interoperability continue to be major challenges. The objective is to build scalable repository technology that supports the federation of thousands of repositories, presenting to the user a coherent collection consisting of millions of related items, and to do this rigorously across many disciplines. As the size and complexity of information objects increases, so also does the bandwidth required to utilize these objects. Real-time interactivity is required for the time-critical assessment of complex situations, pushing the bandwidth requirements yet higher. As this capability emerges, broadband interoperability becomes feasible, in which the user's inputs are no longer constrained to a few keystrokes, with the return channel carrying the high volume materials. Research is required to explore the nature of such broadband interoperability and the opportunities it brings to raise the effectiveness of the information user.
The analyst's attention has become the critical resource. The technological objective is to get the most out of the analyst's attention in the least amount of time by providing a powerful array of tools and automated facilities. The analyst's job, by definition, is to rapidly and effectively understand the full dimensions of an unfolding situation. Real-time correlation and manipulation of a broad array of information resources is critical to this task. Correlation of geographical information (e.g., maps and aerial imagery) with event-related materials (e.g., documents and news reports) is becoming increasingly important. The "GeoWorlds" project (http://www.isi.edu/geoworlds/) is integrating geographically-oriented digital library technology with scalable collection analysis to demonstrate this evolving approach to crisis management in a collaborative setting.
Deriving a comprehensive and accurate assessment of an international situation currently also draws heavily on the skills of translators and linguists. Translingual aids have the dual potential of enabling analysts to perform substantial filtering of multilingual information (thus relaxing their reliance on translators), while concurrently focussing the precious skills of translators on those tasks where their skills are essential. It will come as little surprise to library professionals that the IM research agenda can be broadly structured into context- or task-independent repository-based functions and user- or usage-dependent analysis activities. This is, after all, largely the way libraries have traditionally divided their activities. The DARPA IM program further decomposes each of these two areas into three tracks: Repository functions:
Registration and security provides the registration, access controls, and rights management facilities required to support Defense-related applications in an open network environment.
Classification and federation advances the capability to automate the acquisition, classification, and indexing of information resources among distributed repositories.
Distributed service assurance addresses the vital concerns of matching user interaction styles and needs to system performance capabilities. This work also pushes the boundaries of interactivity over broadband networks.
Analysis activities:
Semantic interoperability strives to extend the analyst's ability to interact with diverse information from distributed sources at the conceptual level.
Translingual interaction builds on recent successes in machine translation to provide the information user the facility for identifying and evaluating the relevance and value of foreign language materials to a particular query, without assuming the user has any proficiency in the foreign language.
Information visualization and filtering focusses on the development of improved tools for visualizing and navigating complex multidimensional information spaces, and on user-customizable value-oriented filters to rank information consistent with the context of the task being performed.
IM Program Objectives
Nine objectives quantitatively and qualitatively characterize the goals of the IM Program: 1. Advance the technologies supporting federated repositories from the present state, characterized by the Networked Computer Science Technical Reference Library (NCSTRL), in which more than a hundred independent repositories are federated using custom software, to a state where generic software is commonly available and supports thousands of distributed, federated repositories (http://www.ncstrl.org/). 2. Enlarge the effective collection capacity of a typical repository from thousands to millions of digital objects, including scalable indexing, cataloging, search, and retrieval. 3. Support digital objects as large as 100 megabytes, and as small as 100 bytes. 4. Reduce response times for interaction with digital objects to sub-second levels, striving for 100 milliseconds, where possible. High duplex bandwidth coupled with low response time provides the opportunity to explore new modes of interacting with information, referred to here as broadband interoperability, in which the traditional query could be reconceived to include a much richer user and task context. 5. Expand the user's functional capabilities to interact with networked information from the present play and display facilities of the WWW, to the correlate and manipulate requirements of a sophisticated information user engaged in network-based research and problem solving. 6. Raise the level of interoperability among users and information repositories from a high dependence on syntax, structure and word choice, to a greater involvement of semantics, context, and concepts. 7. Extend search and filtering beyond bibliographic criteria, to include contextual criteria relating to the task and the user. 8. Reduce language as a barrier to identifying and evaluating relevant information resources by providing translingual services for query and information extraction. 9. Advance the deployed base of general purpose content extraction beyond forms and tagged document structures to include extraction of summary information (e.g., topics) from semi-structured information sources. Conclusion
Perhaps one of the biggest mixed blessings confronting the Defense analyst is the reality that information resources are growing exponentially in number and size, and that they are increasingly coupled to accurate situation understanding and mission effectiveness. The analyst's attention has become the critical resource. The objective of the DARPA Information Management program is to provide the technological capability to get the most out of the analyst's attention in the least amount of time. The Information Management program strives to broadly increase the Defense analyst's ability to work with a diverse and distributed array of information resources in order to understand and develop an appropriate response to time-critical, crisis-driven situations. In short, the program envisions the rigor and organization normally associated with a research library to be virtually rendered and extended in the networked world of distributed information.
hdl:cnri.dlib/july98-larsen
2015-48/0322/en_head.json.gz/1082 | 10 things to drool over in Firefox 4
Speed, simplicity, privacy, security and HTML5 support are some features that promise to make Mozilla's new browser a winner
Katherine Noyes (PC World (US online)) on 18 March, 2011 09:11
Mozilla's Firefox 4 is now officially expected to debut on Tuesday March 22, following hard on the heels of Google's Chrome 10 and Microsoft's Internet Explorer 9.
With so many new browser releases coming out in such rapid succession, it stands to reason that at least some users are going to need some help figuring out which now works best for them.
Toward that end, I had a chat earlier today with Johnathan Nightingale, Mozilla's director of Firefox development, to get a sense of what the final release of Firefox 4 will bring. Here are some of the highlights of what we can expect.
1. More Speed
With its new JägerMonkey JavaScript engine, Firefox 4 delivers huge performance enhancements, Nightingale told me, including faster startup times, graphics rendering and page loads. In fact, in performance tests on the Kraken, SunSpider and V8 benchmarks, for example, Firefox 4 blew away previous versions of the browser, with performance results between three and six times better.
Firefox 4 also outdid Chrome 10, Opera 11.1 and Internet Explorer 9 in the Kraken benchmark, as GigaOM recently noted. Bottom line: It's blazingly fast.
2. Less Clutter
Tabs are now given top visual priority in Firefox 4 so as to enable more efficient and intuitive browsing. In addition to its new "tabs on top" layout, however, the software now also offers a number of other features to make it simpler and more streamlined.
A Switch to Tab feature, for instance, helps reduce tab clutter by automatically calling up an already-tabbed URL rather than duplicating it all over again. "It took my tab list from 80 to 90 down to 50 or 60," Nightingale said.
"The slowest part of browsing is often the user," he explained. "If you have 200 tabs open and you can't find the right one, that's the slow part."
Then, too, there are App Tabs, which allow the user to take sites they always have open -- such as Gmail or Twitter -- off the tab bar and give them a permanent home in the browser. Then, no matter where the user visits, those tabs are always visible on the browser's left-hand edge. Not only that, but each App Tab's icon glows to indicate when there's been activity on that site, such as new mail coming in.
When Firefox gets reloaded, it boosts loading speed by focusing first on the active page and App Tabs, and then loading other tabs in gradual succession after that, Nightingale explained.
Further reducing clutter is Firefox 4's Firefox Button, meanwhile, which displays all menu items in a single button for easy access.
3. Panorama
Though it began as an add-on, Firefox 4's new Panorama feature is another one designed to battle tab clutter. Using it, Web surfers can drag and drop their tabs into manageable groups that can be organized, named and arranged intuitively and visually.
In previous versions of the browser, users with 20 tabs, for example, didn't have an easy way to separate out the ones that were related. "Some people would put tabs in different windows, but that just moves the clutter," Nightingale explained.
Panorama, on the other hand, now provides a visual canvas on which tabs can be logically organized into groups representing work, home, hobbies or research, for example.
4. Sync
Another new feature that started life as an add-on is Sync, which synchronizes an individual's multiple copies of Firefox across various platforms. So, a user might look up directions to a restaurant from their work computer, for example, and then be able to easily find and pull down those same directions from their Android phone on the road, Nightingale explained.
"Wherever you are, Firefox knows you," he added. "It gives you so much freedom."
For privacy, all such information is bundled on the user's local machine and encrypted before it goes onto the network, he added.
5. Do Not Track
With a single check box, Firefox 4 users can ensure that any time the browser requests a Web page, it will send along a header specifying that the user does not want their browsing behavior to be tracked.
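For readers who want to see what this looks like on the wire, here is a minimal Python sketch of a server that checks for the do-not-track request before enabling any tracking. It assumes the browser sends the header as "DNT: 1", the form Mozilla proposed; how a site responds to it is up to the site.

    # Sketch: a server that checks the do-not-track header before tracking.
    # Assumes the browser sends "DNT: 1"; honoring it remains voluntary.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        do_not_track = environ.get("HTTP_DNT") == "1"
        body = b"Tracking disabled for this visit.\n" if do_not_track else b"Tracking enabled.\n"
        start_response("200 OK", [("Content-Type", "text/plain")])
        # A respectful site would skip tracking cookies / analytics here.
        return [body]

    if __name__ == "__main__":
        with make_server("127.0.0.1", 8000, app) as server:
            server.serve_forever()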
In theory, advertisers and Web sites could disregard such requests, Nightingale noted -- as they could equivalent mechanisms in other browsers as well. On the other hand, enforcing them is not a technical problem, he noted. "It's a matter of trust -- enforcing on the technical side doesn't help."
What Nightingale hopes is that advertisers and Web sites will use the new capability as an opportunity to show respect for consumers' wishes and to demonstrate leadership when it comes to privacy. In beta versions of the software, he noted, most wanted to learn more about how to comply and get involved.
"I'm keen to see how ad networks and content sites respond," Nightingale concluded. With the new technology enabled, "everyone you're interacting with knows your intent."
6. Under the Hood
A number of other features -- some visible to users, others not -- will also appear in Firefox 4, including support for the WebM format for HD-quality video; 3D graphics via WebGL; elegant animations through the use of CSS3; and multitouch support.
Then, too, there's super-fast graphics acceleration with Direct2D and Direct3D on Windows, XRender on Linux, and OpenGL on Mac enabled by default on supported hardware.
7. Improved Security
With HTTP Strict Transport Security, or HSTS, sites can now make sure information is always encrypted, thereby preventing attackers from intercepting sensitive data. Previously, a hacker sitting in a Starbucks store, for example, could potentially watch Web surfers enter a bank's home page, which is not encrypted, and hijack them from there, Nightingale noted.
With Content Security Policy, or CSP, meanwhile, Firefox 4 ensures that cross-site scripting attacks can't infect a site such as through its comments section, he added.
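A minimal Python sketch of how a site might attach the two protections just described to its responses follows. The header names and values below are the forms in common use today and are purely illustrative; they may differ from what servers in the Firefox 4 era actually sent.

    # Sketch: the two response headers described above, with illustrative values.
    HSTS = ("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
    CSP = ("Content-Security-Policy", "default-src 'self'; script-src 'self'")

    def add_security_headers(headers):
        """Append HSTS and CSP to an existing list of (name, value) pairs."""
        return headers + [HSTS, CSP]

    if __name__ == "__main__":
        for name, value in add_security_headers([("Content-Type", "text/html")]):
            print(f"{name}: {value}")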
I should also note that because Firefox's code is open, it's not subject to any vendor's preset patch schedule. Rather, its security is constantly being reviewed and improved.
8. HTML5
Firefox 4's new HTML5 parser and full support for Web video, audio, drag & drop, and file handling mean that it's capable of supporting the latest Web environments.
9. Multiplatform Support
Whereas Microsoft's IE9 can be used only on Windows -- and only Vista and Windows 7 at that -- Firefox, as always, is multiplatform. So, whether you're on Windows, Linux or a Mac, you can enjoy its powerful new features.
10. The Community Touch
Last but not least, whereas proprietary browsers such as IE9 are developed by Microsoft's team of paid developers to reflect their own vision of what users want, Firefox has been shaped significantly by the people who use it. In fact, between 30 percent and 40 percent of its code was developed by the community, Nightingale told me. It's hard to imagine a better way to make sure a product delivers what users want.
With so many exciting new capabilities, Firefox users have a lot to look forward to in this new release. So, for that matter, do the legions of Internet Explorer users who will sooner or later make the switch.
Follow Katherine Noyes on Twitter: @Noyesk.
Tags: open source, applications, Google, browser security, Microsoft, browsers, software, internet, mozilla, Firefox
2015-48/0322/en_head.json.gz/1287 | LinuxInsider > Enterprise | Next Article in Enterprise
The New FOSS Frontier: The Database Market
By Ed Boyajian and Larry Alston
Linux and open source middleware JBoss has made its mark in the enterprise, and it is just a matter of time before open source becomes mainstream in other functional parts of the IT infrastructure as well. Where exactly that will happen, however, is the interesting question.
With most companies spending 10 to 20 percent of their revenue on enterprise software, many IT managers would love to see more enterprise-class open source options. However, IT architects and project managers of IT tend to be cautious -- the back office has a low tolerance for risk, which makes it difficult for projects to gain entry into that exclusive back office.
To consider the future of open source in the enterprise, one must consider the past and what it took to make Linux a success. Linux market share continues to grow, and it is considered safe and reliable by even the most risk-averse, but it did not get there via the route traveled by commercially licensed products. The same can be said for Jboss.
In our capitalist society, it has traditionally been financial opportunity that has spurred the development of a product; logic might dictate that the most expensive component would be the first to see competition from an open source product with a lower cost of entry. Open source is different, however, because those that make it happen are not the ones that make an immediate profit from the project. Certainly that is true of Linux, and therefore cost will not necessarily determine the next component to see a viable open source alternative.
To Elaborate
While many people will say that cost savings due to the lack of licensing fees is the primary reason for deploying Linux, that is not the reason why the original developers created Linux or why Linux earned its way out of the early-adopters arena and into mainstream usage. For that, there were two key drivers:
Unity against vendor lock-in: Frustration with the dominance of Windows and the costs and limitation associated with it motivated and continues to motivate many developers to contribute their time and energy to make Linux what it is today.
A large, diverse and thriving community: It requires a massive number of person-hours to produce a product as large and as complex as an operating system, and without a large community, Linux never would have been viable. More importantly, Linux never would have been perceived as a safe, vendor-independent choice if there were not a large number of diverse developers willing to vouch for its stability and functionality.
The same will be true of the next component in the enterprise to have a viable open source option. When looking at the myriad of software in a typical enterprise -- front office, back office and middleware -- there are many expensive components, and there is pull from the market for open source alternatives. However, that is not likely going to be an accurate predictor of which will be available next. For that, let's look at where there is threat of vendor lock-in and an established community.
Look to the Database
If one were to guess on the next part of the enterprise that will be embraced by open source, the database would be a good bet. Consider the first of the two drivers that made Linux a success: Oracle is the perceived gorilla in the market, and with Oracle's acquisition of MySQL (through its pending acquisition of Sun Microsystems), its dominance is even more threatening to developers.
Secondly, consider the community behind Postgres. Postgres (also known as "PostgreSQL") is a full-feature, enterprise-class database that is giving Oracle a run for its money. With over 200 developers contributing to the code base and over 20,000 downloads per week, it qualifies as a very large and diverse community. With databases being a part of almost every application and infrastructure architecture, the community will continue to grow.
One might argue that MySQL (ignoring the pending acquisition for a moment) is already mainstream, that the database has been commoditized, and there is a viable open source alternative to the dominant vendor. Not so. MySQL never challenged a major enterprise DBMS -- nor did it try to. The success of MySQL stems from the fact that it filled a market need that was largely being ignored by commercial vendors. The Motivation
The low-end, low-cost database market had no incumbent, and MySQL quickly filled the void giving developers a quick and easy tool for quickly creating Web-based applications that were easy to deploy and to administer. MySQL is developer-friendly and is geared for programmers who typically build client-rich applications using Ajax, PHP or Perl.
Unlike corporate developers, these programmers are not interested in enterprise features like scalability, concurrency, manageability or full SQL compliance. Developers that needed a full-featured, scalable DBMS still turned to the proprietary vendor solutions -- most likely the big three: Oracle, IBM and Microsoft.
With Oracle dominating the commercial DBMS market, there is ample motivation for a community to create a challenger. Postgres has the breadth and depth of features to rival Oracle, and with commercial vendors (including EnterpriseDB) offering services, support, and the all-important one throat to choke, the database market is poised to be commoditized. Good News
Another indication that the database market is ripe for commoditization is that specialized, open source database management systems are appearing on the horizon to address niche markets. Derby (pure Java) and Hadoop (for data-intensive, distributed apps), for example, are gaining traction for unique applications. With a viable product available, a thriving community in place, and a market ready for commoditization, it is a safe bet that the database will be the next component in the enterprise to embrace open source, and it will likely see the success shared by Linux and JBoss. This is good news for all enterprise architects and project managers who have applications to build and a budget to balance.
Ed Boyajian is president and chief executive officer at EnterpriseDB. Larry Alston is vice president of product management at EnterpriseDB.
More by Ed Boyajian and Larry Alston | 计算机 |
2015-48/0322/en_head.json.gz/3194 | Startup Kit 5 min read
Connect Internationally With Your e-Business
Entrepreneur Press and Rich Mintzer
This excerpt is part of Entrepreneur.com's Second-Quarter Startup Kit which explores the fundamentals of starting up in a wide range of industries.
In Start Your Own e-Business, the staff at Entrepreneur Press and writer Rich Mintzer explain how to build a dotcom business that will succeed. In this book, you'll find recipes for success, road maps that pinpoint the hazards, and dozens of interviews with dotcom entrepreneurs who've proved they’ve got what it takes to survive in this sometimes fickle marketplace. In this edited excerpt, the authors offer some quick tips for doing business with international customers on your ecommerce site. One of the lures of the web is that once your site is up, you're open for business around the world 24 hours a day. Some entrepreneurs may not see very many, or any, international orders. For other entrepreneurs, however, international sales make up a significant portion of their business.
Step one in getting more global business is to make your site as friendly as possible to foreign customers. Does this mean you need to offer the site in multiple languages? Small sites can usually get away with using English only and still be able to prosper abroad.
Consider this: Search for homes for sale on Greek islands, and you’ll find as many sites in English as in Greek. Why English? Because it’s an international language. A merchant in Athens will probably know English because it lets him talk with French, German, Dutch, Turkish and Italian customers. An English-only website will find fluent readers in many nations. (But keep the English on your site as simple and as traditional as possible. The latest slang may not have made its way to English speakers in Istanbul or Tokyo.)
Still, it’s important to recognize that more than 65 percent of web users speak a language other than English. Providing the means of translating your English content to another language will go a long way toward building good customer relationships with people outside the United States. Your multilingual website becomes more accessible and popular if users can translate your website content into their native language.
One way to make this possible is to provide one of the free web-based language translation tools offered by AltaVista Babelfish or Google Translate. Both work similarly. For Babelfish, all you have to do is cut and paste some simple code, and web audiences who speak Chinese (traditional and simplified), Dutch, English, French, German, Greek, Italian, Japanese, Korean, Portuguese, Russian or Spanish will be able to translate websites into their native language with one click.
At the very least, you should make your site friendlier to customers abroad by creating a page--clearly marked--filled with tips especially for them plus photos and simple pictures for directions. If you have the budget, get this one page translated into various key languages. A local college student might do a one-page translation for around $20.
In the meantime, routinely scan your log files in a hunt for any patterns of international activity. If you notice that, say, Norway is producing a stream of visitors and no orders, that may prompt you to search for ways to coax Norwegians into buying. Try including a daily special for this demographic group and include some industry news that relates to Norway. Therefore, if you're in the fashion industry, have an article on your site about Norway’s latest fashion trends.
Clues about international visitors will also help you select places to advertise your site. While an ad campaign on Yahoo! may be beyond your budget, it’s entirely realistic to explore, say, ads on Yahoo! Sweden. If you notice an increase in visitors (or buyers) from a specific country, explore the cost of mounting a marketing campaign that explicitly targets them.
At the end of the day, whether or not you reap substantial foreign orders is up to you. If you want them, they can be grabbed, because the promise of the web is true in the sense that it wipes out time zones, borders, and other barriers to commerce. That doesn’t mean these transactions are easy--they can be challenging, as you’ve seen--but for the etailer determined to sell globally, there's no better tool than the web.
Here are some quick tips for conducting international business:
1. Check all legal issues. You never know what you can and can't transport legally these days. Before even venturing into international waters, make sure whatever you're selling can be transported into various countries without paying extra fees or landing in jail.
2. Let international business find you. Unless you have a very specific product that you know will work well in certain countries (or one country), it’s hard to go after an international market. Most international commerce for small businesses is the result of inquiries from abroad.
3. Hire a customer service rep who is bilingual, at least in English and Spanish. You might also show your customer service reps how to use translation sites.
4. Make sure at the end of the process you'll still come out ahead. Once you factor in the shipping and insurance factors, you’ll need to know that you're still making a profit. With that in mind, you're better off focusing on international B2B sales rather than selling to individual customers unless you're selling products with a decent price tag and a high markup. For instance, if you sell handbags for $500 that cost you only $250, you’ll likely come out ahead on an individual customer purchase. But selling $30 shirts is probably not worthwhile unless you're selling 100 of them to a store or an overseas vendor.
Building Your Online Presence | 计算机 |
2015-48/0322/en_head.json.gz/4100 | Office 365 Office 365: Ready for your business
Henry Ford once said that “before anything else, getting ready is the secret of success.” While he likely was talking about building the Model T, the same can be said of software. To be successful, the software you develop must be ready for your customers’ business.
But what exactly is "enterprise-ready?" In a nutshell, it means that the solution was designed, tested, and implemented with businesses and their business needs in mind. Microsoft Office 365 does exactly that. It takes the well-known and frequently used Microsoft Office productivity tools many organizations rely on - such as Word, Excel, and PowerPoint - and delivers them as cloud-based applications. Office 365 is composed of cloud versions of Microsoft communication and collaboration services - including Exchange Online, SharePoint Online, and Lync Online. The result is an enterprise-ready set of productivity tools that make it possible for businesses to do more with less.
Unlike Google, which builds products for consumers and hopes businesses will adapt, Microsoft designed Office 365 right from the beginning with enterprise-class capabilities in mind. It's the result of more than 20 years of working hand-in-hand with businesses of all sizes to meet their evolving needs.
The enterprise-level features you expect
Some companies are lured by Google Apps’ promises of affordability only to discover that there are hidden costs. Over time, they’re disappointed that Google Apps doesn’t deliver the enterprise-ready features they’ve come to expect. Take Atominx, for example. The web design company initially adopted Gmail, but soon found that Google’s email service didn’t meet its needs. “I adopted the basic Google Mail service because it was easy and it was free,” says Myles Kaye, Director at Atominx. “But as we hired more people, developed new offerings, and served more customers, Google didn’t keep pace with our collaboration needs and business goals.”
In the end, the company switched to Office 365. “As soon as I started to use Office 365, I could tell it was a solid, complete solution,” says Kaye. “Google couldn’t support our growth, but Office 365 helps us give customers the right impression-that we are business professionals, equipped and ready to meet their needs.”
Office 365: Advanced, professional, business-oriented
Likewise, the outsourcing company HSS began testing Google Apps because “it sounded very attractive.” But the company eventually learned that Google Apps was missing many of the features it needed. “We were used to Office … We expected the same level of functionality from Google, but features that we used every day were not available,” says Marina Johnson, Chief Information Officer for HSS. “We didn’t realize how much we would miss Office features until we didn’t have them.” HSS gave up on Google Apps and implemented Office 365. “As soon as I began working with Office 365, I realized that it’s an advanced, professional, and business-oriented application,” says Johnson. “It was like coming home. Everything we needed was there.”
The large Australian automotive retailer A.P. Eagers also tested Gmail but discovered there were trouble spots. Email formatting got corrupted; Gmail didn’t operate well with the company’s Outlook email client. A.P. Eagers instead implemented Office 365 after testing Exchange Online against the same set of criteria and finding that it performed well. “Google Mail appears less expensive at first, but once you add in partner fees and the labor required to get it running, it isn’t as cheap as it appears,” says Shane Pearce, Manager of Information Services for A.P. Eagers. “Office 365 proved to be much more cost-effective for us … Office 365 really is a powerful set of business productivity solutions.”
Choosing an enterprise-grade productivity solution is key to the success of your business. In the end, it adds up to higher productivity, lower costs, and a more competitive business. To learn more, please see our "Top 10 Reasons Why Enterprises Choose Office 365" white paper.
dvan1901 Curious; you say Google "builds products for the consumer and hope businesses will adapt"…how does Microsoft explain the fact that you have now adopted the Outlook.com interface for Office 365 (which was a consumer interface), you have rolled SkyDrive into Office 365 (which was a consumer product) and also Skype is part of Office 365 (a consumer product). It appears that Microsoft is simply validating what Google has been doing for years.
Al @Alina I am one of the biggest fans of Office 365 and I come to this blog to read about the goodness of Office 365. This article's purpose looks to be a direct bashing of Google. I'm not interested in Google personally and am a little surprised that you are.
Can we talk about Office 365 here and let Google talk about Google somewhere else? Or do you suggest I start reading elsewhere to learn Office 365, like over at Google perhaps?
Alina Fu (Microsoft) @Al Thank you for your readership of this blog and for your comments. This blog post is not about "bashing" other solutions but about providing different perspectives that may be useful or considered when evaluating cloud services. Furthermore, the customers referenced are just sharing their experiences of how they made their Office 365 decision. | 计算机 |
2015-48/0322/en_head.json.gz/4284 Jonathan Bowen
Jonathan Bowen is an E-commerce and Retail Systems Consultant and has worked in and around the retail industry for the past 20 years. His early career was in retail operations, then in the late 1990s he switched to the back office and has been integrating and implementing retail systems ever since.
Since 2006, he has worked for one of the UK’s largest e-commerce platform vendors as Head of Projects and, later, Head of Product Strategy. In that time he has worked on over 30 major e-commerce implementations.
Outside of work, Jonathan, like many parents, has a busy schedule of sporting events, music lessons, and parties to take his kids to, and any downtime is often spent catching up with the latest tech news or trying to record electronic music in his home studio.
You can get in touch with Jonathan at his website: www.learnintegration.com.
Books from Jonathan Bowen
Getting Started with Talend Open Studio for Data Integration $ 26.99 This is the complete course for anybody who wants to get to grips with Talend Open Studio for Data Integration. From the basics of transferring data to complex integration processes, it will give you a head start.
| 计算机 |
2015-48/0322/en_head.json.gz/4583
SAP Names Mark White General Manager of Global Public Services and Healthcare Industries
SAP NEWSBYTE - SAP AG (NYSE: SAP) today announced the appointment of Mark White as general manager of the global public services and healthcare industries. White will be responsible for SAP’s public services and healthcare footprint, which covers government, postal, defense and security, healthcare and education customers globally. White will report directly to Simon Paris, who recently took on the newly established role of global head of strategic industries, covering financial services industries, retail and public services. The strategic industries team will provide customers with a streamlined customer engagement model and intensify collaboration with customers and partners. “Mark brings the experience needed to grow a key business for SAP,” said Paris. “Just as IT has moved to the heart of public service and healthcare delivery, these industries — always strategic to us — are now at the heart of SAP’s brand and agenda. Mark is an experienced, proven professional who knows how to motivate a team and get results. His collaborative, straightforward approach is exactly what we need to achieve our ambitious public services and healthcare goals.” White will lead a team dedicated to addressing today’s complex and vitally important public service and healthcare issues — from the challenges associated with the accelerated pace of urbanization to improving people’s access to education and enhancing access to healthcare — all amid public sector budget pressures and limited resources. The public services and healthcare team will continue to focus on offering the right mix of cloud, hosted and on-premise technology that customers can use to improve their efficiency and the quality of life for the citizens, students and patients they serve.
White began his career with SAP in 2002 as chief financial officer (CFO) for the Americas region. He helped establish the SAP National Security Services (NS2) division and headed the company's political action committee (PAC). In his previous role as CFO for Global Customer Operations (GCO), White oversaw strategic financial activities including such areas as legal, real estate and pricing and contracts. Prior to joining SAP, White served as the senior vice president and corporate controller at Lucent Technologies. He has held senior management roles in the IT industry since 1980, including positions with Cadence Design Systems and Unisys. White received a master's degree in business administration from Miami University in Oxford, Ohio, and a bachelor's degree in business from the University of Cincinnati.
Mat Small, +1 (510) 684-3552, [email protected], PDT
| 计算机 |
2015-48/0322/en_head.json.gz/4696
Release date: May 24, 2011
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 DVD for installation on an x86_64 platform | 计算机 |
2015-48/0322/en_head.json.gz/5721 | Critical de-anonymization 0-days found in Tails
Posted on 22 July 2014.
Tails, the security-focused Debian-based Linux distribution favoured by Edward Snowden, journalists and privacy-minded users around the world, sports a number of critical vulnerabilities that can lead to the user's identity being discovered by attackers.
The claim has been made by researchers with vulnerability and exploit research company Exodus Intelligence, who are scheduled to give a talk about it at the Black Hat hacking conference next month.
According to The Register, the company will not share the details of the vulnerabilities with its clients, as it usually does; instead, it will work with the Tails developers to fix them. The company also plans to release some details in a series of blog posts scheduled for next week.
Tails is backed by the Tor Project, as well as by Debian Project, Mozilla, and the Freedom of the Press Foundation, and is considered to be a must-have tool for anyone who wishes to remain anonymous while being and doing stuff online, and to circumvent censorship. The system is designed to be booted as a live DVD or live USB, and comes with a myriad of privacy and anonymity software.
While the interlocking of so many different components is part of what makes Tails so useful, it also makes security weaknesses harder to spot.
The release candidate for the upcoming Tails version 1.1 was made available last week, and the final version is set to be released on Wednesday. Unfortunately, the researchers claim, their "multiple RCE/de-anonymization zero-days" are still present.
| 计算机 |
2015-48/0322/en_head.json.gz/5959 | SimCity players to get free game from EA
EA is offering a free PC game to SimCity players as an apology for the game's bungled launch.
SimCity's launch has been a mess, nay, a disaster. With server problems plaguing the always-online simulation game, many fans are rightfully pissed.
Well, EA is trying to offer an olive branch following the game's troubled launch. No, you won't be able to get a refund. Instead, EA is offering a free game.
Maxis GM Lucy Bradshaw offered the closest thing to an apology from EA so far, admitting that not foreseeing the game's demand was "dumb" and that "we feel bad about what happened." See? Almost an apology! Stating the obvious, Bradshaw added that "if you can't get a stable connection, you're NOT having a good experience."
According to Bradshaw, SimCity players who have activated their game will receive an email describing how to get a free PC download game from EA's catalog on March 18th. There's no detail on what games you'll be able to choose from, but at least it's better than nothing. "I know that's a little contrived," she said. "Kind of like buying a present for a friend after you did something crummy."
Chatty
wytefang
Dumbest post ever.
wfalcon
Apparently he puts the same amount of consideration and time into formulating his opinions as well.
kwperley
If so, they'll give free copies of the sims 3, that way they can make a few hundred on DLC from a gift.
SimCity Series | 计算机 |
2015-48/0322/en_head.json.gz/6462 | A.E.Brain
Intermittent postings from Canberra, Australia on Software Development, Space, Politics, and Interesting URLs. And of course, Brains...
Coca-Cola written backwards
Thanks to Little Green Footballs (LGF to its friends - and enemies), here's a site that's a sight. I feel slightly ashamed being so amused by it, though I guess that's human nature. After all, in the 18th century, guided tours of the Lunatic Asylum at Bedlam were conducted. The people involved are probably more dangerous than most of the Criminally Insane too.
A quick traipse around the site reveals it's no hoax, parody or spoof. A lot of work went into this. It's even quite useful: it contains a canonical copy of Henry Ford's loathsome anti-semitic tracts that appeared in the Dearborn Independent.
It's Islamic - or rather, Islamist. But Wahhabi only, thanks very much. To quote one page: Shiaism (The Rafidah) and Islam are indeed different religions. This sect has developed into what we now know as the Shia whose beliefs and thoughts are repugnant beyond belief. As for the Druze: When the opportunity arises, when they become stronger and find supporters among the ruling classes, they show their true colours and proclaim their real beliefs and aims, and they start to promote evil and corruption, and try to destroy religious teachings, sound beliefs and morals. The website's mission: This website is regularly updated by to reflect the current happenings in the Ummah. It covers a wide variety of topics from clearing misconceptions about Islaam to details on Deviant sects in the Religion. Peculiarly enough, it's based in Hindu India - which rather makes some of the articles about how oppressed Islam is in India a bit of an oxymoron.
But in elucidating the philosophies of Hammas (with their fantasies about the conspiracies of the Lions Club and Rotarians) and other radical Islamist organisations (not always approvingly), it's full of things queer as a Clockwork Orange, and as psychotic as Coca-Cola written backwards. And (pardon the pun) just as Alarming and Amusing.
3/10/2004 08:25:00 p.m. | 计算机 |
2015-48/0322/en_head.json.gz/7394
Banner Saga Devs Say Apple Is Frustrated With Free-To-Play Games
By William Usher
You know things are getting bad when the company that runs an app store believes that the majority of free-to-play titles flooding into the store are bad for business. Well, that's according to Stoic Games, the developers behind the highly acclaimed role-playing game The Banner Saga.
GamesIndustry.biz managed to scuffle together a few words from an interview Polygon conducted with Stoic Games co-founders John Watson and Arnie Jorgensen, in which they expressed their disappointment with the over-saturation of free-to-play titles and noted that Apple, the famed company out of Cupertino, California that makes iPhones and Macs, also feels disappointed with the free-to-play surge. The team is currently working on an iPad version of The Banner Saga, and they're aiming for something big... something bold... something that's not free-to-play. Watson stated that... "Apple is frustrated, along with everybody else, about the mentality that's gone rampant in mobile app markets, where people don't want to pay anything,"
Watson does go on to make a good point about that, saying... "They think that four dollars is an exorbitant amount to pay for a game, which is very illogical considering most people's lifestyles. They'll spend $600 on an iPad, and $4 on a coffee, drop $20 on lunch, but when it comes to spending four or five dollars on a game, it's this life-altering decision. I'm frustrated with that too."
It's not so much that those in the lifestyle feel that games are too much, it's probably that they don't regularly play games (so why would they pay for them?) or that their kids are the ones playing the game. Remember, it wasn't too long ago that the consumer advocacy groups had to step in because it was kids charging up their parent's credit card for in-app purchases on free-to-play games, not the parents. In fact, there's a recent story on Reuters about Google facing class-action lawsuits over kids racking up massive amounts of expenses using their parent's mobile device(s), as reported by Reuters.
The other problem is that the gamers putting the most money into the mobile market are �whales�. Small pockets of consumers who spend massive amounts of money on games, as noted on Forbes. One of Apple's directives for changing this trend is to get higher quality apps onto the store. According to Stoic Games co-founder Arnie Jorgensen... "So they're telling us to go higher-end with our game," ... "We're still making those decisions."
Jorgensen states that Apple encouraged developers to �push it� in terms of performance and presentation. The major problem is that the app market is already over saturated with lots of low-effort, low-tier games that are designed as cash grabs. It's going to be hard to fight that trend now that every major publisher is already cashing in on the trend. Tweet
Heroes Of The Storm's New Map Tower Of Doom Has Arrived
Planetside 2 Is Adding Base-Building | 计算机 |
2015-48/0322/en_head.json.gz/7454 | Report: Retail Office 2013 Software Can Only Be Installed on a Single PC for Life
47 comment(s) - last by Xplorer4x4.. on Feb 19 at 7:09 PM
Microsoft makes Office 2013 licensing much more restrictive
Microsoft has certainly made its share of strange moves over the years when it comes to software licensing. However, the company has again raised the ire of its customers with a change in retail licensing agreement for Office 2013. Microsoft confirmed this week that Office 2013 will be permanently tied to the first computer on which it is installed.
Not only does that mean you will be unable to uninstall the software on your computer and reinstall on a new computer, it also means if you computer crashes and is unrecoverable you'll be buying a new license for Windows 2013.
This move is a change from past licensing agreements with older versions of Office, and many believe that this move is a way for Microsoft to push consumers to its subscription Office plans.
"That's a substantial shift in Microsoft licensing," said Daryl Ullman, co-founder and managing director of the Emerset Consulting Group, which specializes in helping companies negotiate software licensing deals. "Let's be frank. This is not in the consumer's best interest. They're paying more than before, because they're not getting the same benefits as before."
Prior to Office 2013, Microsoft's end-user license agreement for retail copies of Office allowed the owner to reassign the license to a different device any number of times as long as that reassignment didn't happen more than once every 90 days. The Office 2013 EULA changes past verbiage stating, "Our software license is permanently assigned to the licensed computer."
When Computer World asked Microsoft if customers can move Word and its license to replacement PC if the original PC was lost, stolen, or destroyed Microsoft only replied "no comment."
Source: Computer World Comments Threshold -1
RE: Just like every other form of DRM...
I disagree. Pirates are NOT interested in using Office. Of course you may have some, but it's not a significant number. Open Office does the trick if for some reason....A pirate need to do a spreadsheet.I think your stretching it way beyond realistic rational. Parent
Office 365 Launches Today for $100/Year | 计算机 |
2015-48/0322/en_head.json.gz/7486 | Comparing an Integer With a Floating-Point Number, Part 1: Strategy
We have two numbers, one integer and one floating-point, and we want to compare them.
Last week, I started discussing the problem of comparing two numbers, each of which might be integer or floating-point. I pointed out that integers are easy to compare with each other, but a program that compares two floating-point numbers must take NaN (Not a Number) into account.
More >>Reports Strategy: The Hybrid Enterprise Data Center Research: State of the IT Service Desk More >>Webcasts Intrusion Prevention Systems: What to Look for in a Solution Agile Desktop Infrastructures: You CAN Have It All More >>
That discussion omitted the case in which one number is an integer and the other is floating-point. As before, we must decide how to handle NaN; presumably, we shall make this decision in a way that is consistent with what we did for pure floating-point values.
Aside from dealing with NaN, the basic problem is easy to state: We have two numbers, one integer and one floating-point, and we want to compare them. For convenience, we'll refer to the integer as N and the floating-point number as X. Then there are three possibilities:
N < X.
X < N.
Neither of the above.
It's easy to write the comparisons N < X and X < N directly as C++ expressions. However, the definition of these comparisons is that N gets converted to floating-point and the comparison is done in floating-point. This language-defined comparison works only when converting N to floating-point yields an accurate result. On every computer I have ever encountered, such conversions fail whenever the "fraction" part of the floating-point number — that is, the part that is neither the sign nor the exponent — does not have enough capacity to contain the integer. In that case, one or more of the integer's low-order bits will be rounded or discarded in order to make it fit.
To make this discussion concrete, consider the floating-point format usually used for the float type these days. The fraction in this format has 24 significant bits, which means that N can be converted to floating-point only when |N| < 224. For larger integers, the conversion will lose one or more bits. So, for example, 224 and 224+1 might convert to the same floating-point number, or perhaps 224+1 and 224+2 might do so, depending on how the machine handles rounding. Either of these possibilities implies that there are values of N and X such that N == X, N+1 == X, and (of course) N < N+1. Such behavior clearly violates the conditions for C++ comparison operators.
In general, there will be a number — let's call it B for big — such that integers with absolute value greater than B cannot always be represented exactly as floating-point numbers. This number will usually be 2k, where k is the number of bits in a floating-point fraction. I claim that "greater" is correct rather than "greater than or equal" because even though the actual value 2k doesn't quite fit in k bits, it can still be accurately represented by setting the exponent so that the low-order bit of the fraction represents 2 rather than 1. So, for example, a 24-bit fraction can represent 224 exactly but cannot represent 224+1, and therefore we will say that B is 224 on such an implementation.
With this observation, we can say that we are safe in converting a positive integer N to floating-point unless N > B. Moreover, on implementations in which floating-point numbers have more bits in their fraction than integers have (excluding the sign bit), N > B will always be false, because there is no way to generate an integer larger than B on such an implementation.
Returning to our original problem of comparing X with N, we see that the problems arise only when N > B. In that case we cannot convert N to floating-point successfully. What can we do? The key observation is that if X is large enough that it might possibly be larger than N, the low-order bit of X must represent a power of two greater than 1. In other words, if X > B, then X must be an integer. Of course, it might be such a large integer that it is not possible to represent it in integer format; but nevertheless, the mathematical value of X is an integer.
This final observation leads us to a strategy:
If N < B, then we can safely convert N to floating-point for comparison with X; this conversion will be exact.
Otherwise, if X is larger than the largest possible integer (of the type of N), then X must be larger than N.
Otherwise, X > B, and therefore X can be represented exactly as an integer of the type of N. Therefore, we can convert X to integer and compare X and N as integers.
I noted at the beginning of this article that we still need to do something about NaN. In addition, we need to handle negative numbers: If X and N have opposite signs, we do not need to compare them further; and if they are both negative, we have to take that fact into account in our comparison. There is also the problem of determining the value of B.
However, none of these problems is particularly difficult once we have the strategy figured out. Accordingly, I'll leave the rest of the problem as an exercise, and go over the whole solution next week.
Google's Data Processing Model Hardens UpDid Barcode Reading Just Get Interesting?A Datacenter Operating System For Data DevelopersSencha Licks Android 5.0 Lollipop, And LikesMore News» Commentary
Java Plumbr Unlocks ThreadsAbstractions For Binary Search, Part 10: Putting It All TogetherJetBrains Upsource 1.0 Final ReleaseDevart dbForge Studio For MySQL With Phrase CompletionMore Commentary» Slideshow
Jolt Awards 2015: Coding Tools2014 Developer Salary SurveyC++ Reading ListJolt Awards 2013: The Best Programmer LibrariesMore Slideshows» Video
The Purpose of HackathonsVerizon App Challenge WinnersFirst-Class Functions in Java 8Master the Mainframe World ChampionshipMore Videos» Most Popular
State Machine Design in C++A Lightweight Logger for C++Jolt Awards 2015: Coding ToolsBuilding Scalable Web Architecture and Distributed SystemsMore Popular» INFO-LINK
Agile Desktop Infrastructures: You CAN Have It All Mobile Content Management: What You Really Need to Know Client Windows Migration: Expert Tips for Application Readiness New Technologies to Optimize Mobile Financial Services IT and LOB Win When Your Business Adopts Flexible Social Cloud Collaboration Tools More Webcasts>>
Hard Truths about Cloud Differences Return of the Silos State of Cloud 2011: Time for Process Maturation SaaS and E-Discovery: Navigating Complex Waters Research: State of the IT Service Desk More >> | 计算机 |
2015-48/0322/en_head.json.gz/7487 | ZeroMQ: The Design of Messaging Middleware
By Martin Sústrik, February 17, 2014
A look at how one of the most popular messaging layers was designed and implemented
ØMQ is a messaging system, or "message-oriented middleware" if you will. It is used in environments as diverse as financial services, game development, embedded systems, academic research, and aerospace.
Messaging systems work basically as instant messaging for applications. An application decides to communicate an event to another application (or multiple applications), it assembles the data to be sent, hits the "send" button, and the messaging system takes care of the rest. Unlike instant messaging, though, messaging systems have no GUI and assume no human beings at the endpoints capable of intelligent intervention when something goes wrong. Messaging systems thus have to be both fault-tolerant and much faster than common instant messaging.
ØMQ was originally conceived as an ultra-fast messaging system for stock trading and so the focus was on extreme optimization. The first year of the project was spent devising benchmarking methodology and trying to define an architecture that was as efficient as possible.
Later on, approximately in the second year of development, the focus shifted to providing a generic system for building distributed applications and supporting arbitrary messaging patterns, various transport mechanisms, arbitrary language bindings, etc.
During the third year, the focus was mainly on improving usability and flattening the learning curve. We adopted the BSD Sockets API, tried to clean up the semantics of individual messaging patterns, and so on.
This article will give insight into how the three goals above translated into the internal architecture of ØMQ, and provide some tips for those who are struggling with the same problems.
Since its third year, ØMQ has outgrown its codebase; there is an initiative to standardize the wire protocols it uses, and an experimental implementation of a ØMQ-like messaging system inside the Linux kernel, etc. These topics are not covered here. However, you can check online resources for further details.
Application vs. Library
ØMQ is a library, not a messaging server. It took us several years working on the AMQP protocol, a financial industry attempt to standardize the wire protocol for business messaging writing a reference implementation for it and participating in several large-scale projects heavily based on messaging technology to realize that there's something wrong with the classic client/server model of a smart messaging server (broker) and dumb messaging clients.
Our primary concern was with the performance: If there's a server in the middle, each message has to pass the network twice (from the sender to the broker and from the broker to the receiver) inducing a penalty in terms of both latency and throughput. Moreover, if all the messages are passed through the broker, at some point, the server is bound to become the bottleneck.
A secondary concern was related to large-scale deployments: when the deployment crosses organizational boundaries the concept of a central authority managing the whole message flow doesn't apply anymore. No company is willing to cede control to a server in a different company due to trade secrets and legal liability. The result in practice is that there's one messaging server per company, with hand-written bridges to connect it to messaging systems in other companies. The whole ecosystem is thus heavily fragmented, and maintaining a large number of bridges for every company involved doesn't make the situation better. To solve this problem, we need a fully distributed architecture, an architecture where every component can be possibly governed by a different business entity. Given that the unit of management in server-based architecture is the server, we can solve the problem by installing a separate server for each component. In such a case we can further optimize the design by making the server and the component share the same processes. What we end up with is a messaging library.
ØMQ was started when we got an idea about how to make messaging work without a central server. It required turning the whole concept of messaging upside down and replacing the model of an autonomous centralized store of messages in the center of the network with a "smart endpoint, dumb network" architecture based on the end-to-end principle. The technical consequence of that decision was that ØMQ, from the very beginning, was a library, not an application.
We've been able to prove that this architecture is both more efficient (lower latency, higher throughput) and more flexible (it's easy to build arbitrary complex topologies instead of being tied to classic hub-and-spoke model) than standard approaches.
One of the unintended consequences was that opting for the library model improved the usability of the product. Over and over again users express their happiness about the fact that they don't have to install and manage a stand-alone messaging server. It turns out that not having a server is a preferred option as it cuts operational cost (no need to have a messaging server admin) and improves time-to-market (no need to negotiate the need to run the server with the client, the management or the operations team).
The lesson learned is that when starting a new project, you should opt for the library design if at all possible. It's pretty easy to create an application from a library by invoking it from a trivial program; however, it's almost impossible to create a library from an existing executable. A library offers much more flexibility to the users, at the same time sparing them non-trivial administrative effort.
Global State
Global variables don't play well with libraries. A library may be loaded several times in the process but even then there's only a single set of global variables. Figure 1 shows a ØMQ library being used from two different and independent libraries. The application then uses both of those libraries.
Figure 1: ØMQ being used by different libraries.
When such a situation occurs, both instances of ØMQ access the same variables, resulting in race conditions, strange failures and undefined behavior. To prevent this problem, the ØMQ library has no global variables. Instead, a user of the library is responsible for creating the global state explicitly. The object containing the global state is called context. While from the user's perspective context looks more or less like a pool of worker threads, from ØMQ's perspective it's just an object to store any global state that we happen to need. In the picture above, libA would have its own context and libB would have its own as well. There would be no way for one of them to break or subvert the other one.
The lesson here is pretty obvious: Don't use global state in libraries. If you do, the library is likely to break when it happens to be instantiated twice in the same process.
When ØMQ was started, its primary goal was to optimize performance. Performance of messaging systems is expressed using two metrics: throughput how many messages can be passed during a given amount of time; and latency how long it takes for a message to get from one endpoint to the other.
Which metric should we focus on? What's the relationship between the two? Isn't it obvious? Run the test, divide the overall time of the test by number of messages passed and what you get is latency. Divide the number of messages by time and what you get is throughput. In other words, latency is the inverse value of throughput. Trivial, right?
Instead of starting coding straight away we spent some weeks investigating the performance metrics in detail and we found out that the relationship between throughput and latency is much more subtle than that, and often the metrics are quite counter-intuitive.
Imagine A sending messages to B (see Figure 2). The overall time of the test is 6 seconds. There are 5 messages passed. Therefore, the throughput is 0.83 messages/sec (5/6) and the latency is 1.2 sec (6/5), right?
Figure 2: Sending messages from A to B.
Have a look at the diagram again. It takes a different time for each message to get from A to B: 2 sec, 2.5 sec, 3 sec, 3.5 sec, 4 sec. The average is 3 seconds, which is pretty far away from our original calculation of 1.2 second. This example shows the misconceptions people are intuitively inclined to make about performance metrics.
Now have a look at the throughput. The overall time of the test is 6 seconds. However, at A it takes just 2 seconds to send all the messages. From A's perspective the throughput is 2.5 msgs/sec (5/2). At B it takes 4 seconds to receive all messages. So from B's perspective, the throughput is 1.25 msgs/sec (5/4). Neither of these numbers matches our original calculation of 1.2 msgs/sec.
To make a long story short, latency and throughput are two different metrics; that much is obvious. The important thing is to understand the difference between the two and their relationship. Latency can be measured only between two different points in the system; there is no such thing as latency at point A. Each message has its own latency. You can average the latencies of multiple messages; however, there's no such thing as latency of a stream of messages.
Throughput, on the other hand, can be measured only at a single point of the system. There's a throughput at the sender, there's a throughput at the receiver, there's a throughput at any intermediate point between the two, but there's no such thing as overall throughput of the whole system. And throughput make sense only for a set of messages; there's no such thing as throughput of a single message.
As for the relationship between throughput and latency, it turns out there really is a relationship; however, the formula involves integrals and we won't discuss it here. For more information, read the literature on queuing theory. There are many more pitfalls in benchmarking the messaging systems that we won't go further into. The stress should rather be placed on the lesson learned: Make sure you understand the problem you are solving. Even a problem as simple as "make it fast" can take lot of work to understand properly. What's more, if you don't understand the problem, you are likely to build implicit assumptions and popular myths into your code, making the solution either flawed or at least much more complex or much less useful than it could possibly be.
Critical Path
We discovered during the optimization process that three factors have a crucial impact on performance:
Number of memory allocations
Number of system calls
Concurrency model
However, not every memory allocation or every system call has the same effect on performance. The performance we are interested in messaging systems is the number of messages we can transfer between two endpoints during a given amount of time. Alternatively, we may be interested in how long it takes for a message to get from one endpoint to another.
However, given that ØMQ is designed for scenarios with long-lived connections, the time it takes to establish a connection or the time needed to handle a connection error is basically irrelevant. These events happen very rarely and so their impact on overall performance is negligible.
The part of a codebase that gets used very frequently, over and over again, is called the critical path; optimization should focus on the critical path.
Let's have a look at an example: ØMQ is not extremely optimized with respect to memory allocations. For example, when manipulating strings, it often allocates a new string for each intermediate phase of the transformation. However, if we look strictly at the critical path the actual message passing we'll find out that it uses almost no memory allocations. If messages are small, it's just one memory allocation per 256 messages (these messages are held in a single large allocated memory chunk). If, in addition, the stream of messages is steady, without huge traffic peaks, the number of memory allocations on the critical path drops to zero (the allocated memory chunks are not returned to the system, but reused repeatedly).
Lesson learned: Optimize where it makes difference. Optimizing pieces of code that are not on the critical path is wasted effort.
Tools To Build Payment-Enabled Mobile AppsJelastic Docker Integration For Orchestrated DeliveryBoost.org Committee Battles Library Log-JamMac OS Installer Platform From installCoreMore News» Commentary
| 计算机 |
Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles.
While completing her master’s degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historian do.
In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately1,000 paintings of 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.
For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster.
The experiment was performed by extracting 4,027 numerical image context descriptors – numbers that reflect the content of the image such as texture, color, and shapes in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities.
According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism.
“This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said.
Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the former USSR Artists. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but has switched her career path to computer science since emigrating to the United States in 1998.
Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists.
She also has used the computer program as a consultant to help a client identify bacteria in clinical samples.
“The program has other applications, but you have to know what you are looking for,” she said.
Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. “this is just the tip of the iceberg,” she said.
At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects.
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said.
She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology.
“Everyone has the ability to apply themselves in different areas,” she said. | 计算机 |
2015-48/0322/en_head.json.gz/8020 | uDraw GameTablet for PS3 pictures and hands-on
THQ's uDraw GameTablet has already been available on the Nintendo Wii for a while (since March, to be precise) but now it's coming for PS3 and Xbox 360, and features a few new bells and whistles, to boot.
For starters, it's dubbed "high definition" by the games firm as, for the first time, you'll be able to paint and see your creations in HD - the Wii is only an SD console. Plus, the new uDraw has tilt controls added, and pinch and zoom functionality like on an iPad, Android tablet or Apple Magic Trackpad.
But maybe we're getting ahead of ourselves here; what exactly is the uDraw, and what does it offer that you don't already have on your games console?
Well, Pocket-lint was invited to have a play around with one (the PS3 version - although, bar a couple of cosmetic differences, we've been reliably informed that the Xbox 360 one is identical), and in the most basic of terms, it's a wireless graphics tablet for your games machine, allowing you to draw with a stylus and see the results on your telly.
Basic stuff, you probably think and, essentially, you'd be right. The target audience or, at least, the people that will get the most enjoyment out of the uDraw are kiddies. But that doesn't mean that it's exclusive.
While there are obvious applications for younger members of the family, such as the colouring in sections of included paint software uDraw Studio: Instant Artist, there are also games available; some on the free software, and some to buy as add-ons.
For example, uDraw Pictionary: Ultimate Edition is a fun-packed version of the drawing game that even comes with different game formats, and subject packs that are for the old'uns (featuring complicated, rather than rude words).
There will be others on their way too, with the older Wii version already boasting a healthy library of available titles.
Back to the tablet itself though; it's sturdy, the PS3 and Xbox 360 versions both feature control buttons and a direction pad (the Wii version requires you to insert a Wii Remote) so you don't have to fiddle around with a normal controller in menu systems or the console's own user interface.
The stylus/pen is chunky and looks like it could take a bashing, plus it has 256 points of pressure sensitivity so, software willing, can draw precise pictures.
And there's even a uDraw website (www.worldofudraw.com) that you can save and send your pictures to, then seed them out to social networking sites.
The uDraw GameTablet with uDraw Studio: Instant Artist will be out on 18 November for PS3 and Xbox 360, priced around £70.
Available separately, uDraw Pictionary: Ultimate Edition will be on sale from 18 November, for around £30.
- Check out our guide to the hottest Xbox 360 games for Christmas and beyond
uDraw GameTablet
Published: 11 November 2011 9:49 | 计算机 |
2015-48/0322/en_head.json.gz/8615 | Windows “Vienna:” hypervisor being considered
By bringing full virtualization to the desktop operating environment, …
by Ken Fisher
Ben Fathi, VP of Development in Microsoft's Windows Core Operating System Division, says that the next version of Windows (which may or may not be codenamed "Vienna") is coming at the close of 2009. Fathi's comments came at last week's RSA Conference in San Francisco, but the real story isn't the shipping target that the mainstream newswires are fixating on. We all know that those dates are meaningless. Instead, Fathi's hint at what Microsoft has planned for the future is far more juicy. "We're going to look at a fundamental piece of enabling technology. Maybe its hypervisors, I don't know what it is," Fathi said according to InfoWorld. "Maybe it's a new user interface paradigm for consumers." Fathi's hint that Microsoft is considering "hypervisors" echoes similar claims we've been hearing from sources about the future of Windows and really stands out as leading candidate for a significant new OS "feature." Nothing is set in stone right now, and Fathi clearly implies that going hypervisor is just one option. What could Microsoft hope to gain by bringing this level of virtualization to the desktop? OS-level virtualization such as that found with Virtual PC or Parallels for OS X is great for allowing users to run other operating systems or setup development environments. Virtualization applications effectively keep those guest operating systems in a sandbox, making it impossible for them to damage the host OS. Also great, you can abuse a guest OS and yet it is as good as new the next time you want to run it. OS-level virtualization solutions aren't perfect, though. Performance for certain kinds of tasks is lacking, especially with accelerated graphics and intensive I/O such as hard disk use. This is why solutions that run on top of an OS are losing the server virtualization war: better performance can be had by virtualizing the hardware directly, rather than the host OS's resources.
Often called "full virtualization," a hypervisor runs directly on the hardware in question and virtualizes it, providing a common hardware base to all operating environments running atop that virtualized hardware. The virtualization software is designed to handle hardware arbitration and optimize resource sharing, and that's basically it. This allows for considerably improved performance, and both AMD and Intel are working hard to promote this kind of virtualization in Pacifica and Vanderpool, respectively.
By bringing full virtualization to the desktop operating environment, Microsoft could accomplish two rather lofty goals. First, virtualization is ideal for keeping one user from ruining or dominating any particular system while giving them near-full access to that same system. This would be a boon for multi-user systems such as the PC that's shared by a family or a business. In fact, much like "profiles" can move around an Active Directory environment today, with full virtualization, a complete "system" could follow you around, provided a minimum standard of hardware be met. Even better, system maintenance and patching could be standardized in mixed-hardware environments.
Second, Microsoft could use a virtualization environment to start digging the graves of older Windows applications. It seems like it was only yesterday that Apple cut the cord on the original Mac OS and headed into completely new waters with Mac OS X. Unquestionably, the strategy paid dividends for Apple. Mac OS X is fast, secure, very scriptable, and some people even like using it. While the jump to OS X did come with plenty of sad moments, one of them wasn't kissing OS 9 goodbye. By relegating OS 9 to a "classic emulator" and trapping it in a sandbox, Apple could focus entirely on the future with OS X. Just as important, application developers could see the writing on the wall: it was time to get with the program and start moving code to the newer, superior Cocoa and Carbon frameworks. If they didn't, they risked being left behind.
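(A brief illustrative aside, not from the original article: whether a desktop can lean on this kind of hypervisor at all depends on the hardware support that Pacifica and Vanderpool (sold today as AMD-V and Intel VT-x) advertise through CPUID. The C sketch below is only a rough example, assuming a GCC-style compiler with <cpuid.h>; note that firmware can still disable the feature even when the CPU reports it.)

    /* Illustrative sketch only: detect x86 hardware virtualization support via CPUID. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1: ECX bit 5 reports Intel VT-x (VMX) support. */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            printf("Intel VT-x (Vanderpool) reported by the CPU\n");

        /* CPUID leaf 0x80000001: ECX bit 2 reports AMD-V (SVM) support. */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            printf("AMD-V (Pacifica/SVM) reported by the CPU\n");

        /* The BIOS/firmware can still lock the feature off even when these bits
           are set, so a real hypervisor performs further checks before launch. */
        return 0;
    }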
Microsoft, too, needs to make a break with the past, and virtualization is probably the best way to begin to do it. A full virtualization solution in 2009 could be robust enough to handle legacy code, perhaps even gaming applications, on x86 hardware. The technology is shaping up, and we have high hopes that hardware-level acceleration for graphics will be virtualized soon. Intel's next generation of hardware virtualization technology, Intel Virtualization Technology for Directed I/O (VT-d), should make this possible.
It's no secret that one of Microsoft's most formidable challenges is dealing with backwards compatibility. I do consulting for IT shops that still rely on applications and services that are more than 20 years old (and in the world of Windows, that is a long time). I've met gamers who have no intention of giving up their favorite DirectX 7 games. A full virtualization solution could allow for high-performance backwards compatibility while still breaking with the past, and let's face it: a truly clean start would be a welcome thing in the world of Windows programming interfaces.
Could it happen by 2009? Probably not, but since when have shipping dates fazed Microsoft? Whatever the case, we know that Microsoft is thinking about full virtualization, and that Fathi even went so far as to suggest that it could be deployed as a "fundamental piece of enabling technology."
Ken Fisher / Ken is the founder & Editor-in-Chief of Ars Technica. A veteran of the IT industry and a scholar of antiquity, Ken studies the emergence of intellectual property regimes and their effects on culture and innovation. | 计算机 |
2015-48/0322/en_head.json.gz/9106 | How can I make puzzles a challenge for the character, rather than the player?
I'm designing an adventure in which, at one point, the party meets a giant who will only let them pass if they solve some puzzles. However, the problem is that the riddles are a test for the player, not for the character: a very bright person playing a character with average intelligence would probably solve the riddles, while his character may not.
So my question is, how can I make puzzles a challenge for the character, instead of the player?
I don't want advice tied too deeply into a particular game's mechanics - this is a common problem found in most of the trad games I've played. It tends to reduce to "player figures it out" or "player makes some roll to solve it without needing to think about it," both of which have their problems.
gm-techniques system-agnostic adventure-writing puzzle
@mxyzplk - Nice edit to make this actually work as a system-agnostic question.
– Bobson
IMO, it's more fun when the puzzles are a challenge for the player as well. I'd love to find a solution where the players are challenged to solve the puzzle, AND where the characters are somehow challenged to "come up with" the right solution.
– PurpleVermont
May 2 at 20:18
I've been in a game where most if not all the puzzles were for the players, and the rest of the group was often not up to the task. So it frequently came down to me attempting to figure out the solution. Having a character be able to provide some more hints would have been very helpful and less frustrating.
– Fering
Jun 2 at 17:05 | 计算机 |
2015-48/1889/en_head.json.gz/10679 | The World of Aftershock
Ray McKenzie
Known Aliases: Ray-Ban, Amp
Weight: 210 lb.
Hair: Dark blonde
Occupation: Field operative, Shockfront Alpha (Shockfront Initiative; maintains cover job as security consultant for Kx International)
Education: B.A. in English, University of California, San Diego
Place of Birth: Capitola, California
Known Relatives: Alicia McKenzie (mother), Joseph McKenzie (father, deceased)
Powers/Abilities: Electricity manipulation (electrokinesis)
Played by: Zxaos
Ray McKenzie is a sigil-bearer character played by Zxaos.
Ray was born and raised in Capitola, California. His father died when he was ten years old, leaving him to be raised, an only child, by his mother. While not exactly precarious, the family's financial situation was poor, with Ray's mother often having to work long hours to get by. As a child, Ray often had very little money and himself held a part-time job from a young age to as to be able to keep up with his friends, for whom surfing was the popular pastime.
Ray worked for a year to raise some money before enrolling in a B.A. in English program at the University of California, San Diego.
Following his graduation from university, Ray moved to San Diego permanently and was hired as a flight attendant, as he wanted to travel both within the US and internationally but did not have the means to do so. This job allowed him, at least briefly, to experience the sights and sounds of many major cities in the US and eventually some other countries. During this period he continued pursuing his interest in surfing, as well as learning how to play the guitar.
When the Shockwave occurred in 2009, Ray was luckily at home. He was sigil-branded on his upper right leg, and had significant difficulty controlling his abilities in the first few months afterwards, sometimes finding it hard to be near electronics without damaging them, and experiencing moderate pain when exposed to large amounts of water. Due to the risk of having him on board a plane, he was let go from his job as a flight attendant. Forced to give up his surfing as well (temporarily, until his control over his abilities grew), Ray began bouncing from job to job and, on a few occasions when he could not make ends meet, resorted to stealing money from ATMs using his abilities. Even after he had his abilities well under control, he was not able to resume his job at the airline for liability reasons.
Recruited into the Shockfront Initiative in 2013, Ray was one of the first members to join, forming the original Shockfront Alpha team along with Jax Tempest, Anna O'Hare, Mason Paulis, Michael Thorne, Aria Daylewis, and Cole LeGault. For this role he, along with the other members of the team, was given extensive training in modern combat, martial arts, and tactics, as well as in the use of his sigil power.
Ray is outgoing and fun, as well as loyal and protective toward those he cares about. He also enjoys parties, pranks, and practical jokes.
Ray's powers centre around the use and manipulation of electricity. His current powers include the ability to exude electric bolts from any part of his body, or simply electrify his skin. He can manipulate the strength of both of these effects as well as localize them to one body part. He also has the ability, when near death, to convert his entire body to electricity and travel for short distances through wires. This ability appears to be beyond his conscious control, and is very taxing.
page revision: 28, last edited: 09 Jun 2010 23:39
2015-48/1889/en_head.json.gz/11025 | Centos 4.4 x86-64 DVD
release date:September 2006
Johnny Hughes has announced the availability of the fourth update to the CentOS 4 series, a Linux distribution built from source RPM packages for Red Hat Enterprise Linux 4: "The CentOS development team is pleased to announce the release of CentOS 4.4 for i386. This release corresponds to the upstream vendor U4 release together with updates through August 26th, 2006. CentOS as a group is a community of open source contributors and users. Typical CentOS users are organisations and individuals that do not need strong commercial support in order to achieve successful operation. CentOS is a 100% compatible rebuild of Red Hat Enterprise Linux, in full compliance with Red Hat's redistribution requirements. CentOS is for people who need enterprise-class operating system stability without the cost of certification and support."
CentOS is a freely available Linux distribution which is based on Red Hat's commercial Red Hat Enterprise Linux product. This rebuild project strives to be 100% binary compatible with the upstream product, and within its mainline and updates, to not vary from that goal. Additional software archives hold later versions of such packages, along with other Free and Open Source Software in RPM-based packages. CentOS stands for Community ENTerprise Operating System.
Red Hat Enterprise Linux is largely composed of free and open source software, but is made available in a usable, binary form (such as on CD-ROM or DVD-ROM) only to paid subscribers. As required, Red Hat releases all source code for the product publicly under the terms of the GNU General Public License and other licenses. CentOS developers use that source code to create a final product which is very similar to Red Hat Enterprise Linux and freely available for download and use by the public, but not maintained or supported by Red Hat. There are other distributions derived from Red Hat Enterprise Linux's source as well, but they have not attained the surrounding community which CentOS has built; CentOS is generally the one most current with Red Hat's changes.
CentOS's preferred software updating tool is based on yum, although support for the use of an up2date variant exists. Either may be used to download and install both additional packages and their dependencies, and also to obtain and apply periodic and special (security) updates from repositories on the CentOS Mirror Network.
CentOS is perfectly usable for an X Window-based desktop, but it is perhaps more commonly used as a server operating system for Linux web hosting. Many big-name hosting companies rely on CentOS working together with the cPanel control panel to bring the performance and stability needed for their web-based applications.
Major changes for this version are: Mozilla has been replaced by SeaMonkey
Ethereal has been replaced by Wireshark
Firefox and Thunderbird have moved to 1.5.x versions
OpenOffice.org has moved to the 1.1.5 version
1 DVD for x86-64 based systems
2015-48/1889/en_head.json.gz/11055 | Meta: a site with lots of “Thank you” questions?
Yes, the title is based off of Questions with lots of "Thank you" answers. Because to me, it's a related issue, although scoped to Meta specifically. Two posts today have come up, regarding the close UI and the error report. They're not actually questions; they're just stating thanks.
Don't get me wrong. I think it's a good idea that people are finding functionality on the sites useful, and are expressing their thanks. But... is posting a new question really the way to do it? And should we really be rallying behind these people, voting them up because they're publicly announcing their gratitude? Aren't these kinds of questions pretty much as useless as answers that serve no purpose but to say "Thanks"?
At the end of the day, feedback on the sites is part of Meta's scope. Although gratitude is a form of feedback, it isn't one suited to the engine. You can't "discuss" gratitude (what, do you want people to convince you that you shouldn't be thankful?), and these posts are about things that are already implemented, so they don't fall under the other categories either. It's strictly material that fits in comments; I know I express my gratitude in that fashion. If you really want the team to hear your heartfelt words, then you can also email them. The address is always in the footer. That's my take on this.
But does the community prefer that we allow these kinds of non-questions? Feedback is always welcome here in Meta, according to the footer... do we wish to become a site with lots of "Thank you" questions?
What are your thoughts? | 计算机 |
2015-48/1889/en_head.json.gz/12027 | “I began designing typefaces in the early ’90s because there weren’t many typefaces available to us in those days,” Dino dos Santos, founder of DSType, said in his Creative Characters interview. “I started designing fonts that matched the new typographic experience. To me, graphic design was never about taking a picture and then just choosing one of the available typefaces”
Based in Porto, Portugal, Dino got his start designing typefaces for magazines and large corporations. Frustrated that the only fonts available for use were system fonts and dry transfer sheets, he began selling his typefaces on MyFonts. Since then, the self-taught designer has created a library full of striking experiments, charming display type, and most notably, an amazing collection of well-wrought, extensive text families. His collection also boasts a handful of bestsellers such as Velino Text, Prelo Slab and Prumo Slab.
“There is not much of a type design history in Portugal,” he noted in his interview. He is, however, interested in what has been done in his country by older generations of type designers and calligraphers. “I want to understand what happened, how things worked back then, and expose the world to some lesser-known work. History is often seen as something that passed away, and that’s it. But for me history is one of the most relevant aspects of type design. I believe we are made of history, but also that we should take a step forward by connecting it to the present and the future and we can do that through technology.”
Rua Oscar da Silva 1234 4 Esq. Leca da Palmeira Matosinhos, 4450-754
Portugal phone: +351 916 166 740 Related links
MyFonts Creative Characters: Dino dos Santos
126 font families from DSType | 计算机 |
2015-48/1889/en_head.json.gz/12789 | CandySwipe Developer Battling Candy Crush Saga Over His Trademark
The developer of CandySwipe, Albert Ransom, has penned an open letter to King, who is attempting to cancel the trademark on his app.
CandySwipe was originally released on Android in 2010, two years before King’s Candy Crush Saga even existed.
And as you can see, a number of elements from Candy Crush Saga are very similar to CandySwipe.
Here’s part of the letter:
I have spent over three years working on this game as an independent app developer. I learned how to code on my own after my mother passed and CandySwipe was my first and most successful game; it's my livelihood, and you are now attempting to take that away from me. You have taken away the possibility of CandySwipe blossoming into what it has the potential of becoming. I have been quiet, not to exploit the situation, hoping that both sides could agree on a peaceful resolution. However, your move to buy a trademark for the sole purpose of getting away with infringing on the CandySwipe trademark and goodwill just sickens me.
This also contradicts your recent quote by Riccardo in "An open letter on intellectual property" posted on your website which states, "We believe in a thriving game development community, and believe that good game developers – both small and large - have every right to protect the hard work they do and the games they create."
I myself was only trying to protect my hard work.
I wanted to take this moment to write you this letter so that you know who I am. Because I now know exactly what you are. Congratulations on your success!
Back in January, King's trademark request for the word "candy" as it pertains to video games and clothing was approved by the U.S. Patent and Trademark Office.
After that, King began to ask developers with the word in their title to remove their app or prove that it doesn't infringe on their trademark.
If you’re interested in playing the iOS version of CandySwipe, you can download it now in the App Store for $2.99. It’s a universal app for the iPhone/iPod touch and iPad/iPad mini.
For other app news today, see: Tocomail Offers A Fun And Safe Way For Kids To Use Email, Metadata+ Is A Drone Strike App, 'Whether Apple Likes It Or Not,' and Fall Out Boy May Have A Cure For The Flappy Bird Blues.
King Casts Romantic Spell On Bubble Witch Saga For Valentine's Day
Developers Protest King’s ‘Candy’ Trademark By Creating Some Sweet Games | 计算机 |
2015-48/1889/en_head.json.gz/12871 | C0DE517E
Rendering et alter...
2011 Current and Future programming languages for Videogames
There are so many interesting langauges that are gaining popularity these days, I thought it could be interesting to write about them and how they apply to videogames. I plan to do this every year, and probably to create a poll as well.
If I missed your favorite language, comment on this article, so at least I can include that in the poll!
Now, before we start, we have to define what "videogames" we are talking about. Game programming has always been an interdisciplinary art: AI, graphics, systems, geometry, tools, physics, database, telemetry, networking, parallel computing and more. This is even more true nowadays that we everything turned into a gaming platforms: browsers, mobile devices, websites and so on. So truly, if we talk about videogames at large there are very few programming languages that are intresting but not relevant to our business.
So I'll be narrowing this down to languages that could or should be considered by an AAA game studio, working on consoles (Xbox 360 and PS3) as its primary focus, while maybe still keeping an eye on PC and Wii. Why? Well mostly because that's the world I know best. Let's go.
Where and why we need them: Code as Data, trivial to hotswap and live-edit, easier for non-programmers. Usually found in loading/initialization systems, AI and gameplay conditions or to define the order of operation of complex sub-systems (i.e. what gets rendered in a frame when). Scripting languages are usually interpreted (some come with optional JITs) so porting to a new platform is usually not a big deal. In many fields scripting is used to glue together different libraries, so you want a language with a lot of bindings (Perl, Python). For games though, we are more interested in embedding the language and extending it, so small, easy to modify interpreters are a good thing.
Current champion and its strengths: Lua (a collection of nice presentations about it can be found on the Havok website). It's the de-facto standard. Born as a data definition language, it can very well replace XML (bleargh) as an easier-to-parse, more powerful way of initializing systems. It's also blessed with one of the fastest interpreters out there (even if it's not so cool on the simpler in-order cores that power current consoles and low-power devices) and with a decent incremental garbage collector. Easy to integrate, easy to customize, and most people are already familiar with its syntax (not only because it's so popular, but also because it's very similar to other popular scripting languages inspired by ECMAScript). Havok sells an optimized VM (Havok Script, previously called Kore).
Why we should seek an alternative: Lua is nice but it's not born for videogames, and sometimes it shows. It's fast but not fast enough for many tasks, especially on consoles (even if some projects, like Lua-LLVM, lua2c and MetaLua, could make the situation better). It has a decent garbage collector but it still generates too much garbage (it allocates for pretty much everything; it's possible to use it in a way that minimizes the dynamic allocations, but then you'll end up throwing away much of the language) and the incremental collector pauses can still be painful. It's extensible, but you can only easily hook up C functions to it (and the calling mechanism is not too fast), while you need to patch its internals if you want to add a new type. Types can be defined in Lua itself, but that's seldom useful to games. There is a very cool JIT for it, but it runs on very few platforms.
Personally I'd trade many language features (OOP, Coroutines, Lambdas, Metamethods, probably even script-defined structures or types) for more performance on consoles and easier extensibility with custom types, native function calls (invoking function pointers directly from the script without the need of wrappers) etc...
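As a purely illustrative aside (not part of the original post), the sketch below shows roughly what the C-side embedding and extension story mentioned above looks like in practice: creating a lua_State, registering a host C function, and running a script through Lua's stack-based API. It's a minimal example with made-up names (host_log is invented for the sake of the sketch) and almost no error handling.

    /* Minimal sketch of embedding Lua 5.x in C; illustrative only. */
    #include <stdio.h>
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    /* A host function callable from scripts as host_log("message"). */
    static int l_host_log(lua_State *L)
    {
        const char *msg = luaL_checkstring(L, 1); /* argument 1 must be a string */
        printf("[script] %s\n", msg);
        return 0;                                 /* number of Lua return values */
    }

    int main(void)
    {
        lua_State *L = luaL_newstate();           /* one interpreter instance */
        luaL_openlibs(L);                         /* load the standard libraries */
        lua_register(L, "host_log", l_host_log);  /* expose the C function */

        /* In a game this would be a loaded script file, hot-swappable at runtime. */
        if (luaL_dostring(L, "host_log('hello from ' .. _VERSION)") != 0)
            fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));

        lua_close(L);
        return 0;
    }

Every cross-language call goes through that stack-based API, which is exactly the per-call overhead and the "C functions only" extension limit complained about above.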
Present and future alternatives: IO. It's a very nice language, in some ways similar to Lua (it has a small VM, an incremental collector, coroutines, it's easy to embed...) but with a different (more minimal) syntax. It can do some cool things like binding C++ types to the language; it supports Actors and Futures, and it has Exceptions and native Vector (SIMD) support.
Stackless Python. Python, with coroutines (fibers, microthreads, call them as you wish) and task serialization. It's Python. Many people love python, it's a well known language (also used in many tools and commercial applications, either via CPython or IronPython) and it's not one of the slowest scripting languages around (let's say, it's faster than Ruby, but slower than Lua). It's a bigger language, more complex to embed and extend. But if you really need to run many scripted tasks (see Grim Fandango and Eve Online presentations for examples of games using coroutines), it might be a good idea.
JavaScript. ActionScript, another ECMAScript implementation, is already very commonly found in games due to the reliance on Flash for menus. HTML5 looks like a possible contender, and modern browsers are always pushing to have faster and faster JavaScript JIT engines. Some JITs are only available for IA-32 platforms (like Google V8) but some others (like Mozilla TraceMonkey/JaegerMonkey) support PPC as well. The language itself is not really neat and it's full of pitfalls (Lua is much cleaner) but it's usable (also, there are languages that compile to JavaScript and that "clean it up" like coffeeScript and haXe). VMs tend to be big and not easy to understand or extend.
Scheme. Scheme is one of the two major Lisp dialects (the other one being Common Lisp). It's very small and "clean". Easy to parse, not too hard to write an interpreter for, and easy to extend. It's Lisp, so some people will love it, some will totally hate it. There are quite a few interpreters (Chicken, Bigloo, Gambit) that also come with a Scheme-to-C compiler and some (like YScheme) that have a short-pause GC for realtime applications, and that's quite lovely when you have to support many different platforms. Guile is a Scheme interpreter explicitly written to be embedded.
TCL. Probably a bit "underpowered" compared to the other languages, it's born to be embeddable and extensible, almost to the point that it can be seen more as a platform for writing DSLs than as a language. Similar to Forth, but without the annoying RPN syntax. Not very fast, but very easy.
Gaming-specific scripting languages. There are quite a few languages that were made specifically for videogames, most of them in reaction to Lua, most of them similar to Lua (even if they mostly go for a more C-like syntax). None of them is as popular as Lua, and I'd say, none of them emerges as a clear winner over Lua in terms of features, extensibility or speed. But many are worth considering: Squirrel, AngelScript, GameMonkey, ChaiScript, MiniD.
Other honorable mentions. Pawn is probably the closest to what I'd like to have, super small (it has no types; variables are a 32-bit "cell" that can hold an integer or can be cast to a float, a character or a boolean) and probably the fastest of the bunch, but it seems to be discontinued as its last release is from 2009. Falcon is pretty cool too, but it seems to be geared more towards being a "ruby" than a "lua" (that's to say, a complete, powerful multi-paradigm language to join libraries instead of an extension, embedded language), even if they claim to be fast and easy to embed. Last, I didn't investigate it much, but knowing Wouter's experience in crafting scripting languages, I wouldn't be surprised if CubeScript was a hidden gem.
Roll your own. Yes, really, I'm serious. It's not that hard, especially using a parser-generator tool. AntLR is one of the best and easiest (see this video | 计算机 |
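As a final, purely illustrative aside (not part of the original post), here is about the smallest possible version of the "roll your own" idea: a hand-written recursive-descent evaluator for arithmetic expressions in C. It's a toy under obvious assumptions (no variables, no error reporting, no bytecode), but the same shape, a cursor plus one function per grammar rule, is what a small game scripting language grows out of, and it's also the kind of code a parser generator such as ANTLR can emit for you from a grammar.

    /* Toy recursive-descent expression evaluator; illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <ctype.h>

    static const char *p;              /* cursor into the source text */
    static double parse_expr(void);    /* forward declaration */

    static void skip_spaces(void) { while (isspace((unsigned char)*p)) p++; }

    static double parse_primary(void)  /* numbers and parenthesized expressions */
    {
        skip_spaces();
        if (*p == '(') {
            p++;                       /* consume '(' */
            double v = parse_expr();
            skip_spaces();
            if (*p == ')') p++;        /* consume ')' */
            return v;
        }
        char *end;
        double v = strtod(p, &end);    /* number literal */
        p = end;
        return v;
    }

    static double parse_term(void)     /* handles * and / */
    {
        double v = parse_primary();
        for (;;) {
            skip_spaces();
            if (*p == '*')      { p++; v *= parse_primary(); }
            else if (*p == '/') { p++; v /= parse_primary(); }
            else return v;
        }
    }

    static double parse_expr(void)     /* handles + and - */
    {
        double v = parse_term();
        for (;;) {
            skip_spaces();
            if (*p == '+')      { p++; v += parse_term(); }
            else if (*p == '-') { p++; v -= parse_term(); }
            else return v;
        }
    }

    int main(void)
    {
        const char *src = "1 + 2 * (3 - 0.5)";
        p = src;
        printf("%s = %g\n", src, parse_expr());  /* prints 6 */
        return 0;
    }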