Daniel's Web 2.0 related Blog

Perpetual Beta

28/Apr/2010
Leave a Comment

This week, I will be looking at Perpetual Beta in terms of the advantages it offers for software development.  Perpetual Beta has undoubtedly changed the way that products are planned, developed and ultimately marketed – the release cycle has become less rigid and more adaptable to the needs of end users.

One example of the success of Perpetual Beta is Google Maps (covered in the Week 8 lecture) – the long public beta gave Google the user feedback it needed to refine the application before releasing it as a full product – one that has since been adopted in mashups and on some smartphones.

Most importantly, this “extended” beta process achieved greater levels of functionality without sacrificing the quality of a product that has become ubiquitous.


Posted in Uncategorized

Software Above the Level of a Single Device

18/Apr/2010
2 Comments

This week, we looked at software that goes above the level of a single device.  In simple terms, a single modern device can now perform the functions of what were two or more separate devices only a few years ago – and it is software that makes this possible.  Some examples were outlined in the lecture, and I did some research of my own to determine what else on the market meets this criterion.

A current example of software that goes above the level of a single device is that found in the new Nintendo DSi XL.  Nintendo’s latest handheld console plays traditional games, with the added advantage of internet access through its embedded browser.  Its interconnectivity with wireless networks makes it an excellent example of software operating above the level of a single device.

It also takes software interoperability to a new level with its support for older Game Boy Advance games through what Nintendo calls the DSi Shop.  This practice is now used by most console manufacturers – and it adds life to games that would traditionally have become obsolete and difficult to obtain as newer consoles were released.

For a summary of its features, please check out http://nintendods.com/meet-dsi-xl-specs.jsp


Posted in Uncategorized

Richer Web Experiences

27/Mar/2010
2 Comments

This week, we looked at Richer Web Experiences and how they have entered almost every facet of the web.  As much as I could write a blog about richer web experiences sounding the death knell for desktop applications, I have actually found it more interesting to research some of the different technologies that have been developed to deliver them.

In fact, I recently came across this survey at StatOwl (http://www.statowl.com/custom_ria_market_penetration.php).  There are two things that this survey demonstrates:

  1. That Adobe Flash has maintained its popularity among web developers
  2. That Microsoft Silverlight is slowly gaining market share

To get an idea of what the differences are, and how Flash and Silverlight deliver richer web experiences, I did some research online.  I visited a gallery that features matching Flash and Silverlight demos – see:  http://www.shinedraw.com/flash-vs-silverlight-gallery/

Interestingly enough, I found that Flash seems to offer better responsiveness than Silverlight.  For example, the throwing object demo works in the following ways:

  • Flash: Moving the mouse at different speeds results in the objects being thrown more naturally (faster or slower, depending on mouse movement)
  • Silverlight: The objects aren’t as responsive to mouse slides and require the mouse to be held down when moving across the screen
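
The “more natural” throwing behaviour described above usually comes down to how a demo samples the pointer’s velocity at the moment of release and then applies it to the object.  A minimal, framework-neutral sketch of that idea (the function names and friction value are my own, not taken from either gallery demo):

```javascript
// Estimate release velocity from the last two pointer samples.
// prev/curr: { x, y, t } where t is a timestamp in milliseconds.
function releaseVelocity(prev, curr) {
  const dt = (curr.t - prev.t) / 1000; // seconds between samples
  if (dt <= 0) return { vx: 0, vy: 0 };
  return { vx: (curr.x - prev.x) / dt, vy: (curr.y - prev.y) / dt };
}

// Advance the object by its velocity for dt seconds, then damp the
// velocity so the "thrown" object coasts to a stop.
function step(obj, dt, friction = 0.9) {
  return {
    x: obj.x + obj.vx * dt,
    y: obj.y + obj.vy * dt,
    vx: obj.vx * Math.pow(friction, dt * 60),
    vy: obj.vy * Math.pow(friction, dt * 60),
  };
}
```

A fast mouse movement produces a large distance over a short time, so the estimated velocity – and therefore the throw – is faster, which is exactly the behaviour the Flash demo gets right.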

As was demonstrated in the lecture earlier this week, performance and user functionality are very important in platforms that deliver richer web experiences to the end user.  So with this in mind, the question I have is: can the dominance of Flash for delivering richer web experiences really be challenged by Microsoft?


Posted in Uncategorized

Innovation in Assembly

20/Mar/2010
2 Comments

This week, we looked at Innovation in Assembly.  Basically, the lecture explained how the future of API development will follow a more open, collaborative structure.  We looked at how Amazon developed an API that lets other web businesses build cost-effective resources based on the Amazon model.

A great example of this practice is Google Maps (http://maps.google.com.au/maps?ct=reset).  It best reflects how the media used to display maps have evolved over time.  Starting with printed reference maps (e.g. UBD), mapping evolved into proprietary software in dedicated units, such as GPS navigators.  As a result of this move, map licensing has become profitable for businesses such as Navteq (who develop comprehensive maps for GPS navigators, printed maps etc.)

Eventually, Google developed its Google Maps service, which provides free mapping and a range of other services, like directions between points on the map.  The popularity of this service, combined with its web-based map database, makes it an attractive option for businesses that take advantage of GPS.

This service links to Innovation in Assembly, as other businesses have integrated Google Maps into their applications – for example, HP and their iPAQ smartphone range (example: iPAQ Voice Messenger – http://h10010.www1.hp.com/wwpc/au/en/sm/WF06a/215348-215348-64929-3352590-3352590-3806508.html).  By providing a cost-effective platform, the service offers greater exposure for Google’s products while reducing development costs for businesses.  After all, if HP used Navteq maps, HP would have to pay a licence fee, which would be passed on to the end user.  In this respect, integrating Google Maps into GPS-enabled iPAQ smartphones is a smart move for consumers.
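
To make the “assembly” part concrete, this is roughly what integrating Google Maps looks like from the business’s side – a minimal sketch assuming the v3 JavaScript API, with the element id and coordinates as placeholders of my own:

```html
<!-- Load the Google Maps JavaScript API, then draw a map into a div. -->
<div id="map" style="width: 400px; height: 300px;"></div>
<script src="http://maps.google.com/maps/api/js?sensor=false"></script>
<script>
  // Centre the map on Brisbane (placeholder coordinates).
  var map = new google.maps.Map(document.getElementById("map"), {
    zoom: 12,
    center: new google.maps.LatLng(-27.47, 153.02),
    mapTypeId: google.maps.MapTypeId.ROADMAP
  });
</script>
```

The point is that none of the map data lives with the embedding site – the business assembles its product on top of Google’s service rather than licensing and maintaining its own map database.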

If you can think of another good example, please add it in the comments! 🙂


Posted in Uncategorized

Data is the next Intel Inside

11/Mar/2010
4 Comments

This blog entry is about the future of data and its effect on Web 2.0 applications.  In the lecture, we learned that Web 2.0 as we know it relies (in part) on open-source software and code.  Many elements of web access are already provided by open-source software – from Apache servers (that host the web content) to database management tools, like MySQL.

There can be little doubt that open-source development is on the rise.  In fact, according to Deshpande & Riehle (http://dirkriehle.com/publications/2008/the-total-growth-of-open-source/), it accounts for a large portion of the web server market.  To add to this, they describe the growth of open-source development tools as “exponential.”  The question is, are open-source applications providing the platform for a more “open”, free Internet?

Unfortunately, the answer is most likely no.  The reason is that open-source technologies are being absorbed by the new IT giants that have formed as a result of Web 2.0.  Like Intel in processor development, Google holds a significant proportion of market share, as shown by various measures (such as Nielsen polls).  But it could be argued that Google will only support the idea of an open internet while it serves the interests of the business.  In fact, in an article about the strength of Google, Messina argues that “Google decides which ports it wants to open and for whom.” (http://factoryjoe.com/blog/2006/08/20/building-a-better-mouse-trap/)

All of the above relates to the concept of “Data as the next Intel Inside” because, as O’Reilly argues, “…data as the Intel inside is the one that will close [web 2.0] down” (http://radar.oreilly.com/2006/09/open-data-small-pieces-loosely.html).  In other words, the first business that can take open-source content and utilise it to serve its own interests will influence the future development (or control, depending on your ideology) of Web 2.0 – just as Intel has dominated PC processor development, almost uninterrupted, for nearly 20 years.

In this respect, perhaps Google, rather than the ubiquitous “Data” is the next “Intel Inside”.

Please feel free to comment and offer your views!


Posted in Uncategorized

Harnessing Collective Intelligence

03/Mar/2010
Leave a Comment

It could be argued that “Collective Intelligence” summarises the power of the internet – both for the individuals who use it and for corporate interests trying to wield its power.  The lecture on Harnessing Collective Intelligence raised issues surrounding the latter point.  In particular, it compared the practices of businesses and organisations, pointing out how differing methods have emerged for building new information resources based on the input of users around the world.  The key issue is how.

In the lecture, three distinct methods of creating “something” were listed:

  • “Paying people to develop information” – a practice that has been adopted from the “real” world – e.g. book publishers paying authors for the right to sell their intellectual property
  • “Allowing volunteers to provide information” – Wikipedia was cited as an example
  • “Creating or providing existing information as a result of using Web 2.0 to serve individual needs”

Using the methods of publishing information above, it could be argued that each method has value in Web 2.0.  Referring to volunteers providing information to shared databases, Bricklin (2006) argues, “interested individuals provide the data because they feel passionate enough about doing so.”

Despite this, it is clear that the prominent method for businesses to develop and market information will come from utilising existing information-gathering businesses and technologies.  According to Bricklin (2006), more and more information will come as a by-product of past developments (such as street maps).  Taking the street map example, this means that the task of developing and maintaining maps will fall to a few firms (e.g. Navteq) that already sell their maps to GPS manufacturers – eliminating the need for device makers to develop their own digital maps.  Even Whereis, a popular online street directory, sources its mapping information from UBD and Telstra (http://www.whereismaps.com/about-our-maps.aspx).

This actually has positive implications for copyright, as it shows that businesses are prepared to pay a licensing fee to information-gathering businesses while saving the cost of traditional information building (including paying staff to gather the information independently).

Ultimately, “Collective Intelligence” gives new power to businesses, which can provide more innovative services to a growing online market without the traditional costs associated with developing the necessary information and content.


Posted in Uncategorized

My blog

24/Feb/2010
Leave a Comment

Well, here it is – a new blog – the second social networking tool that I’ve signed up to in a week… it was this or my MySpace blog, but WordPress seems straightforward…


Posted in Uncategorized

Hello world!

24/Feb/2010
1 Comment

Welcome to WordPress.com. This is your first post. Edit or delete it and start blogging!


Posted in Uncategorized

About author

I am an I.T./Education undergraduate at Queensland University of Technology (QUT).
