Thursday, 9 December 2010

Coursework post 2


Using the Internet and evolving standards and technologies associated with the WWW to publish information in effective and accessible ways.

For this I am going to look at Web 2.0 and its use in libraries. From the lectures we received and the reading I have done in the area of Web 2.0 and the much-touted 'Library 2.0', I learned that many libraries that were previously ignoring, avoiding or fighting Web 2.0 are now implementing some 2.0 activity. One of the most common is having a Facebook page for the library, as well as publishing blogs and wikis and tweeting about upcoming events, guides, FAQs etc. Some also make use of a virtual 'ask a librarian' reference service. This is sometimes labelled Library 2.0, where "emphasis is placed on user-centered change and participation in the creation of content and community based services." (Peltier-Davis, 2009)

One of the major components of Web 2.0 in the library is the ability for users to 'tag' items in the catalogue. This allows users to expand on the metadata provided by the library and to create a network of resources. The positive side is that the metadata surrounding the catalogue is added to by the users of the information, creating a richer set of data that can make items easier to find. An example is the ability to tag a book in the main library catalogue with the module code for a particular course, allowing a different user to search for that module code and retrieve the resources another user found useful for the course. Many university libraries are now implementing systems like this to augment the data surrounding their catalogues. Tagging can also lead to book linking, where a user who finds a useful information resource tags it so that it appears on the pages of similar books on the same topic. For example, if a user researching Information Law finds a useful book and adds an appropriate tag, this widens the search results to include items that are not solely focused on Information Law but perhaps contain relevant chapters. As mentioned in my blog, the negatives with this (and with all Web 2.0 applications and interactions) are that there can be inappropriate tagging, where items are tagged incorrectly or maliciously, and that there is no controlled vocabulary. In the traditional catalogue both problems are eliminated by the use of proper library metadata, but in Web 2.0 they can be addressed with a mixture of manual checking and automated checking, where words entered are mapped to terms within a controlled vocabulary, for example with a simple 'did you mean' function on the input page.
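
To make that last point concrete, here is a minimal sketch of how a 'did you mean' check against a controlled vocabulary might work, using Python's standard difflib module; the vocabulary terms here are invented for illustration.

```python
import difflib

# Toy controlled vocabulary -- a real catalogue would load this from its
# authority files rather than hard-coding it.
CONTROLLED_VOCABULARY = ["information law", "copyright", "data protection", "cataloguing"]

def suggest_tag(user_tag):
    """Return the closest controlled-vocabulary term, or None if nothing is close."""
    matches = difflib.get_close_matches(user_tag.lower(), CONTROLLED_VOCABULARY,
                                        n=1, cutoff=0.75)
    return matches[0] if matches else None

print(suggest_tag("informatoin law"))  # -> 'information law'
print(suggest_tag("xyz"))              # -> None (no suggestion)
```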

Another innovative way libraries can interact with their users via the Internet is a virtual librarian chat service. When the user is on the library's website, and in particular searching the catalogue, an instant-messaging-style box pops up and allows them to ask questions, much as they would when visiting the physical library. This adds value to a library website because it is a unique service the library can provide, which could help persuade a user to use its services rather than relying on Google Scholar and Google Books. In an era where libraries need to prove their worth over systems like these to both users and funders, a virtual librarian could be a very important tool that utilises the power of Web 2.0 in an effective and accessible way.



Identifying appropriate and innovative methods of digital data representation and organisation and assessing their potential for use in the information sciences.

For this I am going to look at the use of the Semantic Web in the library setting. The Semantic Web is a term coined by Tim Berners-Lee to describe "an extension of the current one (WWW), in which information is given a well defined meaning, better enabling computers and people to work in cooperation" (cited in Rubin, 2010). There are many different definitions of the Semantic Web from many different sources, but I believe this is a simple and easy-to-understand explanation from the man who first envisioned it. The idea is to create richer relationships between pieces of information that are machine-readable, rather than the human-readable information of the current WWW. This allows unique links to be made between pieces of information based on new kinds of connection such as "works for, is author of, depends on" (Rubin, 2010), rather than the current method where a simple link from one piece of data to another is the only connection.

This new method of establishing links between information relies on the Resource Description Framework (RDF): "In RDF, a document makes assertions that particular things (people, Web pages or whatever) have properties (such as 'is a sister of,' 'is the author of') with certain values (another person, another Web page)" (Berners-Lee et al., 2001, p.40). As described in my blog postings, RDF statements are made up of triples containing a subject, an object and a predicate. These triples then form a web of data, with objects becoming subjects of further triples, so the web comes to contain many subjects and objects all interlinking in some way. From there we can develop an RDF schema (RDFS), which describes a taxonomy for the RDF statements within whatever domain the schema covers. Using the Web Ontology Language (OWL), the taxonomy and its rules can then express logical links between pieces of information, for example that if x is true then y must be true. Tim Berners-Lee describes this layering as the 'Semantic Web Stack'.
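
As a concrete illustration, here is a small sketch of building a few triples with the Python rdflib library; the resources and relationships are made up, and rdflib is just one of several RDF toolkits:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")  # invented namespace for the example

g = Graph()
# Each triple is (subject, predicate, object).
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, EX.isAuthorOf, EX.informationLawBook))
# The object of one triple becomes the subject of another -- this is how
# the 'web' of interlinking subjects and objects grows.
g.add((EX.informationLawBook, EX.dependsOn, EX.dataProtectionAct))

print(g.serialize(format="turtle"))
```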

The use of the Semantic Web in libraries could come from linking the library catalogue, and the richness of the information resources it contains, to the web. Libraries already use a wealth of metadata in their catalogues, and library workers understand the need for it, so it would make sense that if any progress is to be made in implementing a Semantic Web, these channels, alongside computer science workers, with all their expertise in metadata and cataloguing, should be the way forward, to begin with at least. In Karen Coyle's 2010 paper Library Data in the Web World she discusses this: "With Web based data, we can use the vast information resources there to enhance our data by creating relationships between Library data and Information resources. This will not only increase opportunities for users to discover the library and its resources, but will also increase the value of the data by allowing its use in a wide variety of contexts." (Coyle, 2010). The Dublin Core Metadata Initiative has been pivotal in bringing metadata skills to the Semantic Web, producing its own set of metadata standards, which can also be implemented in a Semantic Web.
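
As a rough sketch of what this looks like in practice, rdflib ships with the Dublin Core element set as a ready-made namespace; the record URI and values below are invented for illustration:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

# A hypothetical catalogue record described with Dublin Core properties.
record = URIRef("http://catalogue.example.org/record/1234")

g = Graph()
g.add((record, DC.title, Literal("Foundations of Library and Information Science")))
g.add((record, DC.creator, Literal("Rubin, R. E.")))
g.add((record, DC.date, Literal("2010")))

print(g.serialize(format="turtle"))
```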

The issues with implementing this are that it would take a huge effort by whoever takes on the task of creating the RDF data, and although in limited fields the benefits could be huge (medical research, for example), the average user is happy with the current way information is displayed on the web, and the investment needed does not outweigh the benefit gained at this moment in time. There is also the issue of trust: the data being used must be correct and must not mislead or contain false information, as for example in the MMR controversy. Coupled with the fact that information could also be marked up with incorrect metadata, these all present considerable hurdles to the Semantic Web working in any domain.


Utilising recent advances in information and communications technology to support the successful completion of a wide range of information related tasks with proficiency in an online digital environment.

For this I am going to look at mobile information and mobile devices. From the lectures on this topic and my blog post, I believe these allow users to complete a wide range of information-related tasks from anywhere in the world with nothing but a smartphone.

The advances in mobile information technology over the last three years have been vast. Since Apple announced the iPhone in 2007, the technological advances and possibilities in this area have grown exponentially. In these three short years Apple has released four versions of the iPhone running iOS, Google has released its own mobile OS, Android, for use on a variety of phones, Microsoft has Windows Phone 7 for smartphones, and BlackBerry and Nokia have continued their earlier development of smartphone technologies. These technologies have led to new ways of searching for and utilising information, which have become ingrained in modern life. In a Forbes online blog in 2009, Ewalt claimed that Apple alone had sold 50 million iOS-capable devices (iPhone and iPod touch) (Ewalt, 2009). These devices utilise recent advances in information and communication technology by giving users full access to the Internet wherever they are via Wi-Fi and 3G networks, using context awareness to add richness to any information gathered, and allowing the transfer of files via Bluetooth.

I believe the use of this in a library setting could take many forms, from providing a mobile version of the library website to a mobile version of the catalogue. This would allow customers to access the full library catalogue from their mobile device rather than using an OPAC terminal in the library or loading the full website on their device. As was discussed in my blog, full websites can run slowly on mobile devices due to high graphical content and the need for large amounts of scrolling. Another way academic or public libraries can utilise mobile devices and mobile information to satisfy some of their users' information needs is to provide an app for their organisation.
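
One simple, if crude, way a library site could route phones to a lightweight catalogue is user-agent sniffing. The sketch below assumes a hypothetical WSGI app and a hypothetical m.library.example.org address; a production site would use a maintained device-detection library rather than this keyword check:

```python
# Keywords that suggest a mobile browser (circa 2010) -- deliberately crude.
MOBILE_KEYWORDS = ("iphone", "ipod", "android", "blackberry", "windows phone", "nokia")

def is_mobile(user_agent):
    """True if the User-Agent string looks like a mobile browser."""
    ua = user_agent.lower()
    return any(keyword in ua for keyword in MOBILE_KEYWORDS)

def catalogue_app(environ, start_response):
    """WSGI app: send phones to the lightweight catalogue, others to the full site."""
    if is_mobile(environ.get("HTTP_USER_AGENT", "")):
        start_response("302 Found",
                       [("Location", "http://m.library.example.org/catalogue")])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html><body>Full catalogue page</body></html>"]
```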

As we discussed in the lab for this session, and as mentioned in my blog, this app could take many forms and provide many functions: it could include links to the mobile version of the catalogue, allow users to view their current items on loan and renew them, and offer a floor map of the library as well as a map that uses GPS to show the route to the library from wherever the user is. Future versions of the app could also allow users to check out material with their phone using a barcode scanner that exploits the device's camera, and could use augmented reality to guide the customer around the library to where they need to be (if the current GPS technology is improved, or abandoned for the more accurate Galileo system or similar).
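
To give a flavour of the barcode-checkout idea, here is a sketch of what the app might do once the camera has read an ISBN, posting it to a hypothetical library checkout endpoint; the URL, field names and ISBN are all invented:

```python
import json
import urllib.request

API_BASE = "https://library.example.org/api"  # hypothetical endpoint

def checkout_scanned_item(isbn, user_id):
    """Send a scanned barcode (ISBN) to a hypothetical self-checkout service."""
    payload = json.dumps({"isbn": isbn, "user": user_id}).encode("utf-8")
    request = urllib.request.Request(API_BASE + "/checkout", data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# The phone's barcode scanner has just read a (made-up) ISBN from a book cover:
# checkout_scanned_item("9780000000002", "student-1234")
```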

A further feature of mobile devices could be to replicate the user's library card by containing an RFID chip that could be scanned to allow access to the library. From there a single mobile device could provide the user with everything they need: access to the library, searching the catalogue, finding where items are in the stock, guidance to the item, and checking the item out and renewing it later. Of course, some of these possibilities are closer than others, and they depend on the user having a smartphone and the technology being compatible across four or five mobile operating systems. The library would also need to keep the mobile versions of the website and catalogue constantly up to date alongside the full versions.




References

Berners-Lee, T., Hendler, J. and Lassila, O., 2001. The Semantic Web. Scientific American, 284(5), pp.34-43.

Coyle, K., 2010. Library Data in the Web World. Library Technology Reports, 46(2), pp.5-11. Available from: http://ehis.ebscohost.com/eds/pdfviewer/pdfviewer?vid=2&hid=121&sid=bc989ed8-8c7c-4b61-830f-f600d48e16d0%40sessionmgr112 [Accessed 8 December 2010]

Ewalt, D.M., 2009. Apple's Shocking App Store Numbers. Digital Download (Forbes), 4 November. Available from: http://blogs.forbes.com/digitaldownload/2009/11/04/apples-shocking-app-store-numbers/ [Accessed 8 December 2010]


Peltier-Davis, C., 2009. Web 2.0, Library 2.0, Library User 2.0, Librarian 2.0: Innovative Services for Sustainable Libraries. Computers in Libraries, 29(10), pp.16-21. Available from: http://0-web.ebscohost.com.wam.city.ac.uk/ehost/detail?vid=1&hid=113&sid=72761674-081e-429b-a03e-0ff1ceff362d%40sessionmgr114&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d [Accessed 8 December 2010]


Rubin, R.E., 2010. Foundations of library and information science. 3rd ed. New York: Neal-Schuman Publishers, Inc.






DITA session 10

Information Architecture part 2

We looked at organisation systems for data, and at the schemes, structures and labelling systems they use.

Schemes can be exact or ambiguous.

Structures are the technology that supports the schemes.

Labelling systems give meaning to the schemes so that users know how they work.

Exact systems are easier to build but are rarely used, as the user has to know the exact piece of information they want to find. They are organised alphabetically, chronologically or geographically.

Ambiguous schemes are much more common, as the user will browse until they find what they need. They can be topical or subject-based, task-orientated, or audience-specific.

Labelling systems are the language used in the organisation scheme; the scope of the labels should be narrowed to a specific audience, and the labels should be consistent.
Controlled vocabularies give meaning to the words, and the relationships between the words should be defined (synonyms, antonyms etc.).
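
As a tiny illustration, a synonym ring can be as simple as a lookup table mapping the variant terms users type to the vocabulary's preferred label (the terms here are invented):

```python
# Variant terms a user might enter, mapped to the preferred label.
SYNONYM_RING = {
    "car": "automobiles",
    "cars": "automobiles",
    "automobile": "automobiles",
    "motor vehicle": "automobiles",
}

def preferred_term(query):
    """Translate a user's term into the controlled vocabulary's preferred label."""
    return SYNONYM_RING.get(query.lower(), query)

print(preferred_term("Car"))   # -> 'automobiles'
print(preferred_term("bike"))  # -> 'bike' (no mapping, passed through unchanged)
```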

Then we looked at a search task where we had a picture of a vegetable, no idea what it was, and had to work out how we could find out using these search methods.

To do this we looked at Navigation and Searching styles.

The perfect catch (known item + exact)

Lobster trapping/berry picking, where there is an information need, a search takes place, and the information need is adjusted in light of the search results, possibly leading to a new search.

Driftnetting, where the user browses randomly for things that do not relate to each other.

We then looked at how websites can aid navigation by telling the user where they are, what's related, where they have been and where to go next.



Finally we looked at the theory behind visual design, how important graphical design is on a website (first impressions count), and how personalisation and customisation work on the net.

Personalisation is where the website reads your cookies and history to try to provide information that would be useful to you, although most often it is used in advertising.

Customisation is where the user alters the system to show the information they need.

DITA Session 9

Open Data and Information Architectures part 1.

We looked at open source software: software released and developed for free, for users to utilise and enhance. Most is released under the General Public Licence (GPL), which means that any program incorporating GPL code cannot be turned into closed, proprietary software; all future incarnations must also be released under the GPL.

Open Data is a government and public-body initiative to provide the data they collect for free on the Internet. This removes the need for costly and time-consuming Freedom of Information Act requests and also promotes transparency in government. There is also a push for this data to be released using the RDF model of the Semantic Web.

Open Data websites like data.gov.uk and data.gov (US) release their datasets as searchable databases and encourage users to create applications that exploit the data; unlike open source, these applications can use the data for free but still charge users for the application itself. This leads to people creating open data mashups that provide information on many aspects of life, such as local schools information and historical instances of traffic build-up.
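
A mashup can start from something as simple as the sketch below: fetch an open dataset published as CSV and pull out a couple of columns. The URL and column names are placeholders, since every dataset on data.gov.uk has its own endpoint and schema:

```python
import csv
import io
import urllib.request

DATASET_URL = "https://data.example.gov.uk/datasets/schools.csv"  # placeholder URL

# Download the dataset and decode it as text.
with urllib.request.urlopen(DATASET_URL) as response:
    text = response.read().decode("utf-8")

# Print two (hypothetical) columns from every row.
for row in csv.DictReader(io.StringIO(text)):
    print(row["school_name"], row["postcode"])
```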

Another valuable set of open data is the Ordnance Survey's datasets, which could prove very valuable to programmers who can use the map information to create some very useful applications.

A downside of this is the possibility of the data being taken out of context and used for political ends.

Information Architecture part 1

We looked at the progress of the Internet over the last 10 years and how it was, and is, viewed, using the two editions of the Rosenfeld and Morville book as a comparison.

We looked at the theory behind web design: how a site needs to work well and look good to be successful, and how documents and the links between them should be like rooms and doors connecting together.

DITA session 8

The Semantic Web and Web 3.0

In this session we looked at what the Semantic Web is and its relation to Web 3.0, and at the differences between Web 1.0 (read), Web 2.0 (read/write) and Web 3.0 (read/write/execute).

The Semantic Web has been touted since the inception of the web by the W3C, led by Tim Berners-Lee. It aims to give richer meaning to information and to make that information machine-readable, which allows information to become unambiguous.

We looked at how RDF triples are made up of a subject (a resource), an object (a property of the subject) and a predicate (the relationship between the two).

We looked at how the Dublin Core Metadata Initiative is establishing a set of metadata rules for use as predicates. Then we saw how RDF triples can form webs of data, linking together by using each other's objects as subjects.

Then we looked at the taxonomies involved with RDFs, which allow a schema to be produced, and at how most taxonomies are hierarchical, though not all are.

Then we took a look at OWL (the Web Ontology Language) and how it sets out the rules for the taxonomies and how they create relationships.

Then the Semantic Web Stack was looked at, and how it is made up of:
Web resources
RDF (metadata)
RDFS (taxonomies)
OWL (ontologies)

The advantages of the Semantic Web are that it allows for emergent behaviour: lots of facts plus a few rules of inference can produce surprisingly sophisticated results. But this only works in limited domains, it takes a huge input of effort for a comparatively small output of data, and there are also issues of trust in both the validity of the data put in and the metadata used.
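
The 'facts plus rules of inference' point can be shown with a toy example: from two asserted part-of triples and a single transitivity rule, the sketch below derives a fact nobody typed in (all names invented):

```python
# Asserted facts as (subject, predicate, object) triples.
facts = {
    ("rare_books_room", "part_of", "city_library"),
    ("city_library", "part_of", "city_university"),
}

def infer_part_of(triples):
    """Apply 'if A part_of B and B part_of C then A part_of C' to a fixed point."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p, b) in list(inferred):
            for (b2, q, c) in list(inferred):
                if p == q == "part_of" and b == b2:
                    new_fact = (a, "part_of", c)
                    if new_fact not in inferred:
                        inferred.add(new_fact)
                        changed = True
    return inferred

print(infer_part_of(facts) - facts)
# -> {('rare_books_room', 'part_of', 'city_university')}
```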

Wednesday, 24 November 2010

DITA session 7

Mobile information

For this session we looked at the pros and cons of mobile information services, e.g. context awareness from GPS locations, and the limitations of screen and keyboard sizes.

Context awareness: GPS can provide web searches with local results, with satellites pinpointing the location of the hardware to within 40 metres. Most smartphones contain this capability, and they also have compass and accelerometer capabilities, so they know which direction you are facing, which is useful for getting directions from mapping software. It also allows you to geotag pictures and access local information via Wikipedia or the like.
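
As a small worked example of what a context-aware search does with a GPS fix, the haversine formula below computes the straight-line distance in metres between the phone's position and a point of interest (the coordinates are made up):

```python
import math

def distance_metres(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Distance from a made-up phone fix to a made-up library location.
print(round(distance_metres(51.5279, -0.1025, 51.5285, -0.1040)), "metres")
```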

We also looked at how Bluetooth can be used to send advertising to people discreetly, and at the privacy issues with Bluetooth.

We covered the problems of limited screen size: how websites provide mobile versions of their sites that allow easier navigation on a mobile device, and how servers can throw away information not needed by the user. A server can also recognise the OS of the device and could in theory send only compatible information to it, but the technology isn't quite there yet. Mobile sites should be designed with the mobile device in mind, keeping the need to scroll to a minimum, keeping graphics light, and removing all but basic navigation.

Keyboard size: the trade-off between button number and size, the use of virtual keyboards on touchscreen devices, how there are different keyboards for different tasks, the use of auto-complete, and the emergence of gesture control.
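
Auto-complete itself is conceptually simple; the sketch below finds completions by binary-searching a sorted word list for the typed prefix (the word list is invented):

```python
import bisect

WORDS = sorted(["librarian", "library", "libretto", "licence", "lichen"])

def autocomplete(prefix, limit=3):
    """Return up to `limit` words that start with `prefix`."""
    start = bisect.bisect_left(WORDS, prefix)  # first word >= prefix
    results = []
    for word in WORDS[start:]:
        if not word.startswith(prefix):
            break  # sorted order: no later word can match either
        results.append(word)
        if len(results) == limit:
            break
    return results

print(autocomplete("lib"))  # -> ['librarian', 'library', 'libretto']
```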

We then looked at what people actually use their mobile devices for: mostly trivia and local information.

Finally we looked at the combination of social media and location services, and how this can add rich metadata to photos etc. but can also allow for serious privacy breaches.

During the lab session we discussed what a City student would want from a mobile information app. We talked about using augmented reality to help guide someone around the campus, access to the library catalogue, and the inclusion of a social network to discuss lectures and receive timetable changes.

DITA session 6

Web services and APIs

We looked at the future of software as a service, where the program is not stored locally on a hard drive but is held on the net and accessed via a web portal, with users only paying for what they use, reducing cost because you wouldn't need to buy a whole suite.

We covered the possibilities of cloud computing, where all the user's data is stored on the net with very little held on hard drives, leaving the home computer merely as a window onto the Internet through which to access all your files.

On web services, we looked at the use of XML: the difference between a web page and a web service; how XML is not a language despite its name but is in fact a set of conventions for creating languages similar to HTML; how XML documents consist of elements; how each element contains other elements or text and can also have attributes; and how a document has exactly one root element.
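
Those XML rules are easy to see in a concrete document; the sketch below parses a made-up record with Python's standard library, showing the single root element, nested child elements, text content and an attribute:

```python
import xml.etree.ElementTree as ET

# A made-up XML document: one root element, nested elements, an attribute.
document = """
<catalogue>
  <book id="b1">
    <title>An Invented Title</title>
    <year>2010</year>
  </book>
</catalogue>
"""

root = ET.fromstring(document)
print(root.tag)  # -> 'catalogue' (the single root element)
for book in root:
    print(book.attrib["id"], book.find("title").text, book.find("year").text)
```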

APIs hide the internal complexity of web services and allow programmers to build on existing functionality with ease. Almost all programming is done via APIs.

Finally we looked at mashups, which use APIs and web services to create new, innovative systems: how no programming experience is needed, how JavaScript can be used to manipulate web services and APIs, and how many services publish code that can be dropped straight into HTML.

An example of my mashup can be found here.

DITA session 5

This session focused on what Web 2.0 is and the impact it has had on the way the Internet is used.
We used the definition that Web 1.0 was the 'read' web and Web 2.0 is the 'read/write' web, as used by the majority of users.

Web 2.0 offers a rich user experience, encourages user participation, has dynamic content, uses metadata, and promotes openness and freedom.

We looked at the limitations of HTML in terms of delivering rich user experiences and the promise of HTML5 to provide them; how multiple users interacting can give a site purpose; how the ability to tag items brings items together and adds metadata without the need for a more formal, library-style metadata system, but also how this can be abused; and how social interaction has lowered the amount of censorship, increased freedom of speech, and how the narrative of a site will shape its social constraints.

We then went on to look at Facebook and its impact on the web: how it is a non-specific tool on which you can do most social networking activities; how the lack of avatars and handles gives a real online personality; its dependence on mutual agreements between 'friends'; how your online activity becomes visible and can be commented on; how it incorporates IM and email systems; and finally the privacy issues within Facebook.

Wikipedia is another form of Web 2.0 we looked at: how anyone can contribute within the set editorial constraints; the pros and cons of this method of gathering information; the wider context a wiki offers over a traditional encyclopedia; how it provides anonymity; and how it can create a hive-mind effect.

We covered blogs: how they are chronological pieces of short writing; their birth as diaries and their progression to micro-journalism and professional self-promotion tools; how interlinked blogospheres allow cross-communication; and the impact of micro-blogging, i.e. Twitter.

Finally we looked at the negatives and criticisms of Web 2.0: how it can lead to buzz and hype, its promotion of narcissism and amateurism, and how it enhances the fickle nature of people's personalities.