Journalism Evolution and Exploration


Today we had a guest speaker! The second half of the class featured a guest talk by Eliza Anyangwe (of the Guardian Professional Networks). We had the opportunity to explore how journalism has evolved over the past decades as it has moved from a mainly paper medium to publishing on the internet and on phones. A journalist's publishing options have moved from cubicles and sources to webpages, blogs, and a surprising amount of Twitter as journalism has entered the digital age.

One concern raised during the lecture is the worry of journalism values returning to the “Stone Age.” Journalism's values have shifted along with the amount of information that can be published and how articles are written. It is important for certain words not to be used, and for an article not to appear to be sponsored by a particular company. Journalists often need approval to write about popular brands, and new policies have been put in place as digital journalism has evolved.

Alongside the change in what journalists write comes a change in where and how they write. The where is now just about anywhere, armed with a laptop or tablet or even a phone and an internet connection. Meanwhile the how changes depending on the form of media the article is aimed at. An article written for a blog may easily be written differently than an article written for a paper newspaper or a website, while a Twitter post is more different still.

It makes for an ever-evolving medium, and creates an interesting scope of information and methods to be taught to journalism students entering an evolving career.

Media Growth


On 6th February we spent time talking about various types of media and how the media now takes up such a large portion of our lives. First we discussed a wide range of media types, from hot to cool media, and the various ways media reach us, from magazines to television to books to the vastness of the internet. To begin with, hot media is defined as engaging one sense completely while demanding little participation, whereas cool media is low-definition media that engages several senses less completely and demands more participation.

From this we were able to explore how deeply embedded in our lives media has become. We have moved into a media-heavy life where we roll over in bed in the morning, turn off the alarm on our phones, and check what happened on Facebook or Twitter or email while we were sleeping. We see media plastered to the sides of buses and cabs and lining the trains and tube stations: altogether an overwhelming amount of media that is so much more than the traditional newspapers and magazines with their paid ad sections, or the easily ignored or skipped adverts that break up your nightly television viewing.

New Course, Old Blog


A blog that began in DITA will now continue with a new course: LAPIS, Libraries and Publishing in an Information Society. The first session began with an overview of the term and an introduction to the world of publishing in the current age. Heading into the final term, we started looking into publishing in the current information society. And as it is the start of a new term, the blog title got a change-up, swapping the DITA title for a more general one.

Aside from that… The definition of publishing has changed as the information age has progressed. No longer is publishing relegated to books alone. It has expanded to include a wide variety of products, from literature to music to just about any type of information available. This is just one of the topics the course will be expanding on over the next ten weeks. The new term has begun…

DITA Reflections


Despite the somewhat early hour of the class (no matter how hard I try, I am not a morning person), DITA was definitely one of my favorite classes this past semester. While I wasn't new to blogging, it was nice to get into a regular schedule that prompted me to keep up a few of my blogs and to explore those written by others. DITA also explored a wide range of the more digital aspects of the library world, from the emerging world of blogging to TAGS and archiving to an in-depth exploration of multiple APIs. Blogging helped to tie the other topics together, linking them and creating an opportunity for discussion and further learning, and it allowed a more thorough exploration of the topics outside of the lecture and lab periods.

I still can't believe that the course has already finished! It's a bit different for me, as I am used to 15-week semesters rather than the shorter 10 that I get to enjoy at CityLIS, and the time seems to have flown by. I am eager to continue using this blog (although it might need a name change now) as a means of professional development and learning, as it is a great way to work out new ideas and share or explore information.

Time for a little Holiday break and then on to our Spring classes!  Merry Christmas!

Old Bailey API



The Old Bailey API offers the opportunity to delve into history, discovering how crimes were categorized and judgments awarded. The API offers a much more in-depth way to research and discover the files within the Old Bailey archive. The text analysis returns a variety of information for each search. The API allows for a more critical search than the original search feature; however, the original search feature returns a more user- and reader-friendly view of the results.
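For anyone who wants to poke at the API directly, here is a minimal Python sketch of the sort of request involved. The endpoint, query parameters, and response fields are assumptions based on published obapi examples rather than an authoritative reference, so expect to adjust them.

```python
import json
import urllib.request

# Hypothetical query against the Old Bailey API (obapi): count trials
# whose text mentions "theft", broken down by offence category.
# The URL shape and parameter names here are assumptions.
URL = ("https://www.oldbaileyonline.org/obapi/ob"
       "?term0=trialtext_theft&breakdown=offcat&count=0")

with urllib.request.urlopen(URL) as response:
    data = json.load(response)

# Inspect what came back rather than assuming the exact shape.
print("Top-level fields:", list(data.keys()))
for entry in data.get("breakdown", []):  # field name is an assumption
    print(entry)
```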

I did have trouble when attempting to export to Voyant Tools, as I continually got an error message when clicking the link. However, I imagine that using a visualization tool to view the results in a different manner would have been fascinating.

My chosen Utrecht University digital humanities lab text mining research project was Annotated Books Online, a “virtual research environment for scholars and students interested in historical reading practices.” The search function is very basic when compared to the Old Bailey API, although Annotated Books Online holds a much smaller collection of information.

Overall, while these are two very different digital catalogs of information, they both serve a very historical purpose, allowing us to reach back in time.

Data Sets and Word Maps


DITA this week centered on the opportunity to “surf and stumble,” exploring datasets and the ways that we can break them down. Three different websites were used to complete this exercise: Wordle (http://www.wordle.net), Many Eyes (http://many-eyes.com), and Voyant Tools (http://voyant-tools.org). These websites allow you to input a set of information, which they then analyze and develop into a variety of word maps and charts. This provides us with a visual of what was originally a mere list of data: you can see which words are used most often and which are used the least.

One of the tools, Voyant, also allows the individual to exclude certain unimportant words such as: a, the, and, of…. These are examples of frequently used words that aren't going to convey real information or present an accurate visual of the data set. By removing these simple stop words it becomes easier to discover the meaningful and useful words that appear most often in the dataset, beyond the ifs, ands, and thes. The Voyant website was easily my favorite of the three, as it allows you to see the data reflected in ways beyond the word map it produces. These include a summary and a words-in-the-entire-corpus section, which allow for a more ‘wordy’ way of analysing the data beyond the visual word map. For example, the words-in-the-entire-corpus section lets you see which basic words are used the most and assess which words need to be excluded from the word map in order to get at the real data.
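The same frequency-counting-with-stop-words idea is easy to reproduce by hand. Here is a small Python sketch of it; the stop list is my own abbreviated one, not Voyant's.

```python
from collections import Counter
import re

# A tiny stop list; Voyant ships a much longer one.
STOP_WORDS = {"a", "an", "and", "the", "of", "to", "in", "is",
              "it", "for", "on", "that", "this", "with", "as"}

def word_frequencies(text, top_n=10):
    """Count words, drop stop words, return the most common."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

sample = "The library of the future is a library of data, and the data is vast."
print(word_frequencies(sample))
# -> [('library', 2), ('data', 2), ('future', 1), ('vast', 1)]
```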

The various tools used for today's exercise introduce new methods of exploring data and analyzing the content of a gathering of information. This is useful in many ways, especially when researching a particular topic, as it allows one to quickly discover what other important words or information come up alongside it. Through this we are able to become better researchers and organizers, editing information and following where a trail leads from one topic to another.

New Ways of Keeping Track


This week in DITA we looked at ways of using the company (and tool) Altmetric. Altmetric provides a service that an individual can use to search for a topic and see how that topic is being discussed: how often, by how many people, and on which platforms (from social media to the news). This provides users with new ways of keeping track and collecting useful data. It allows you to see which articles are the most popular and therefore the most current or useful for your needs. Or it could help you to find an unpopular article, something not being discussed and therefore something more unexpected.

This also allows people to keep track of news and information in a new way: rather than going to one or more individual websites to look for information, you can go to one place and have the multiple searches done for you. Altmetric allows the user to see where an article is being discussed, how many people are discussing it, and what is being said. It provides real-time tweets regarding the subject and links to the other sources where the topic is being discussed.
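Altmetric also exposes a public API, so the same lookups can be scripted. A minimal sketch of a lookup by DOI might look like the following; the endpoint and response field names reflect my understanding of the public v1 API and should be treated as assumptions.

```python
import json
import urllib.request

# Look up attention data for one article by DOI.
# Endpoint and field names are assumptions about Altmetric's v1 API.
doi = "10.1038/480426a"  # an arbitrary example DOI
url = f"https://api.altmetric.com/v1/doi/{doi}"

with urllib.request.urlopen(url) as response:
    data = json.load(response)

print(data.get("title"))
print("Tweeters:", data.get("cited_by_tweeters_count", 0))
print("News stories:", data.get("cited_by_msm_count", 0))
```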

This type of technology, rather than cutting out the middleman, introduces a better, smarter middleman that works a lot faster than we could without one. Now, if only it were free like Google…

#TAG (and other uses of the word)


The first thing to pop to mind when presented with the word TAGS is, for some rather obvious reasons, hashtag. It is, after all, rather common vocabulary at this point in time, usable not only on Twitter but on other social media sites such as Facebook, Instagram, and Pinterest. However, as we discovered in last week's DITA class, TAGS also refers to an application developed by Martin Hawksey, designed to collect tweets.

By following some detailed instructions it became a rather simple thing to set up the TAGS application to collect data around a particular hashtag, for example #citylis. This data can then be viewed in a manner that allows us to see how the tweets are connected, from comments to retweets, as well as to identify which tweeters use the hashtag most often. This allows us to view data in a more tangible way. After all, scrolling through our Twitter feed may let us view various tweets attached to #citylis, but it doesn't let us see it all on one screen. The TAGS application creates such an opportunity: to view a wide array of data all on one screen in a quantifiable way.
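Because TAGS lands everything in a spreadsheet, you can also export the archive as a CSV and summarize it yourself. Here is a sketch along those lines; the from_user column name is an assumption about the TAGS sheet layout, and the file name is made up.

```python
import csv
from collections import Counter

def top_tweeters(csv_path, top_n=5):
    """Count which accounts tweet the tracked hashtag most often,
    given a CSV export of a TAGS archive sheet."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        # "from_user" is an assumption about the TAGS column name.
        counts = Counter(row["from_user"] for row in reader)
    return counts.most_common(top_n)

print(top_tweeters("citylis_archive.csv"))
```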

I found the TAGS application to be particularly interesting, and believe that it would be a very useful tool for companies, businesses, and libraries to identify important data that affects them. For example, it would be very useful for a library to track #greatread or #bestseller, etc., to get a view of what is being read. It's also useful in the way that we used it in class: for a business or group with a specific hashtag, such as #citylis, to measure its use, from who uses it to how often to how many people. The TAGS application presents a new way for the individual to view the community surrounding a #!

APIs: the background workers


APIs are essentially what happens behind the scenes, the background workers, the reason things work the way they do. We don't really think about why copy and paste works, how Facebook ads know that you like Skittles, or why putting a check mark next to Facebook and Twitter allows your new Instagram photo to show up there as well. We might appreciate the ways in which these little tricks happen, mainly because they make things easier for us, but it doesn't change the fact that these things only happen because those websites made them possible with the use of an API.

“API, an abbreviation of application program interface, is a set of routines, protocols, and tools for building software applications. The API specifies how software components should interact and are used when programming graphical user interface (GUI) components.  A good API makes it easier to develop a program by providing all the building blocks. A programmer then puts the blocks together.” – API – application program interface

The use of an API allows the programmer of a website, such as Facebook or Amazon or even Google, to achieve a particular outcome that lets the user access the website in an easier manner. It can ease certain difficulties, such as the brilliant interface that allows a person to copy and paste! Can you imagine how tedious and annoying it would be if there was something online that you wanted to quote in an essay and you had to go back and forth between windows typing it out word by word?? It would be terrible, to say the least. At the same time, APIs also allow websites to create the opportunity for outside sources to take peeks at what we do, post, and look at, and then tailor the ads on the side of the page to precisely what that individual person would want.
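To make the idea concrete, “talking to an API” usually boils down to sending a structured request and getting a structured reply back. Everything in this Python sketch (the domain, the path, the response fields) is hypothetical; it only illustrates the shape of the exchange.

```python
import json
import urllib.request

# A hypothetical API call: ask an imaginary ad service which
# interests it has recorded for a user. Nothing here is a real
# endpoint; it only shows the request/response pattern.
url = "https://api.example.com/v1/users/42/interests"

with urllib.request.urlopen(url) as response:
    data = json.load(response)  # e.g. {"interests": ["sweets", "films"]}

for interest in data.get("interests", []):
    print("Show an ad about:", interest)
```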

So the ultimate question is: is there ever going to be some sort of governance regarding what a programmer or website can do with a particular API? The things that create shortcuts and make options more available could simply be classified as improvements to the site. However, the APIs that let outside sites know that a Skittles ad would be a great enticement for this person, which is of course covered in the fine print when you first signed up, seem to push a bit of a line. You want to stick some random advertisements on the side of my Facebook page to make money and support the site? Sure, I understand that it's pretty necessary. But are tailored ads right? Should there be lines drawn regarding what someone can do with an API?

Information Retrieval & Databases


When retrieving information from a database there are two very important terms that come to mind: precision (the proportion of retrieved documents that are relevant) and recall (the proportion of relevant documents that are retrieved). These terms refer to the outcome of a query run through a database and the amount of data returned that is actually relevant to the query. The accuracy of the query results will vary across databases, with some databases more accurate, or more suited to the query, than others. Once a query has been determined it is then important to decide on the correct database to use. Whether the answer is a wide Google search or a narrower academic search, choosing the correct database will determine the recall of the query.
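In the standard formulation, precision is the number of relevant documents retrieved divided by the total retrieved, and recall is that same number divided by the total that are relevant. A quick Python sketch of the arithmetic, with made-up document ids:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query.

    retrieved: set of document ids the database returned
    relevant:  set of document ids that actually answer the query
    """
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Toy example: 4 documents returned, 3 of them among the 6 relevant.
retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d1", "d2", "d3", "d5", "d6", "d7"}
print(precision_recall(retrieved, relevant))  # -> (0.75, 0.5)
```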

This is an important fact that few seem to realize. While producing an accurate query is incredibly important to the search's recall, it is arguably more important to choose the correct database through which to run the query. The more specific the query, the more necessary it is to choose a more specific database in order to return the most relevant documents. By being as selective about your database as you are about narrowing down your query, you will find the information relevant to your research or essay faster, cutting out time spent going through pages of results to find the same documents.