Thursday, November 13, 2008

Week 10 Comments

Comment #1:
I commented on Jen’s “Intro to Information Technology” page:
https://www.blogger.com/comment.g?blogID=1475137707322366107&postID=7838913334025505790&page=1

Comment #2:
I commented on Megan’s “The Alley View” page:
https://www.blogger.com/comment.g?blogID=1139180432200060758&postID=5904996233988888078

Reading Response #10: Harvest Time

Of the readings assigned this week, I found “Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting” to be the most interesting as it peripherally touched upon the relationship between metadata harvesting and “The Deep Web.”

Typically, harvesting culls information from Web pages using the metadata tags embedded in their HTML code. Initially, there was some dispute over whether this work should be done automatically or manually, the concern being that people would create too many disparate terms while a purely software-based procedure would miss important semantic relationships. With Dublin Core rapidly becoming the standard metadata schema, Web pages are increasingly adhering to a common format. But does this improvement in search methods extend to the “Deep Web”?
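
To make the automated side concrete, here is a minimal sketch, in Python, of what tag-based harvesting might look like; the sample page and its values are invented for illustration, and real harvesters are considerably more robust:

    # A toy harvester: collect Dublin Core <meta> tags from a page's HTML.
    from html.parser import HTMLParser

    class DCMetaHarvester(HTMLParser):
        """Collects <meta name="DC.*" content="..."> tags."""
        def __init__(self):
            super().__init__()
            self.record = {}

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                attrs = dict(attrs)
                name = attrs.get("name", "")
                if name.startswith("DC."):      # Dublin Core naming convention
                    self.record[name] = attrs.get("content", "")

    sample_page = """<html><head>
      <meta name="DC.title" content="Current Developments for OAI-PMH">
      <meta name="DC.creator" content="Example Author">
      <meta name="DC.date" content="2008-11-13">
    </head><body>...</body></html>"""

    harvester = DCMetaHarvester()
    harvester.feed(sample_page)
    print(harvester.record)
    # {'DC.title': 'Current Developments for OAI-PMH', 'DC.creator': ..., ...}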

The “Deep Web” includes information that is available to the public but resides outside the scope of traditional search engines because the data is stored in proprietary databases, accessible only through direct queries. However, the OAI Protocol allows search engines without normal access to this information to index pages hosted in the “Deep Web” through OAI repositories. As a significant portion of digital information resides in the “Deep Web,” this new avenue of accessibility is important because it helps promote open, transparent policies regarding public information.
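
Out of curiosity, I sketched what an OAI-PMH harvest request actually looks like. The verb and metadataPrefix parameters come straight from the protocol, but the repository URL below is a made-up placeholder:

    # Sketch of an OAI-PMH request: the repository answers with XML records
    # that a harvester (or search engine) can then index.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    base_url = "http://repository.example.edu/oai"   # hypothetical repository
    params = {
        "verb": "ListRecords",        # ask for full metadata records
        "metadataPrefix": "oai_dc",   # returned as unqualified Dublin Core
    }

    with urlopen(base_url + "?" + urlencode(params)) as response:
        xml = response.read().decode("utf-8")
    print(xml[:200])   # the start of the returned OAI-PMH XML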

Muddiest Point #10

I was curious why uploading my website to Pitt's server didn't work the first time but did the second. In the first instance, I followed Dr. He's instructions to the letter but kept getting a 403 error message. The second time, I used instructions from the Technology Help Desk. The only difference between the Help Desk's instructions and Dr. He's was that I typed a telnet address into my browser. This address prompted me to choose a program to open it, and I selected FileZilla. The FileZilla window opened, and when I sent the file, it was visible on Pitt's page.

Saturday, November 8, 2008

Assignment #6

To view my completed web page, please click here.

Thursday, November 6, 2008

Week 9 Comments

Comment #1:
I commented on Lauren’s blog “LIS 2600 Land”
https://www.blogger.com/comment.g?blogID=4181925387762663697&postID=3072857614832667163

Comment #2:
I commented on Theresa’s blog “Intro to Information Technology”
https://www.blogger.com/comment.g?blogID=5586031599791302355&postID=9132805377535596301

Reading Response #9: Gone Fishing

Michael Bergman’s article “The Deep Web: Surfacing Hidden Value” is an important white paper because it addresses both the limitations of current search engine formats and the structure of information on the Web. Google, today’s most popular search engine, relies on an aggregate formula to create a list of query results intended to minimize duplicates and maximize the number of relevant resources. However, this format is inherently imperfect because it relies on a system of popularity through web citation (similar to the way prominent scientific journal articles cite one another). As a result, a web page with relevant information might end up further down a list of query results simply because it has not been cited adequately by other pages.
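
To see why that happens, here is a toy version of citation-based ranking in Python -- in the spirit of PageRank, not Google's actual formula. Page D carries information, but nobody cites it, so it lands at the bottom:

    # Toy citation ranking: a page's score comes from the scores of the
    # pages that cite it. The link graph is invented.
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],     # D cites others, but no page cites D
    }

    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    damping = 0.85

    for _ in range(50):   # iterate until the scores settle
        rank = {
            p: (1 - damping) / len(pages)
               + damping * sum(rank[q] / len(links[q])
                               for q in pages if p in links[q])
            for p in pages
        }

    print(sorted(rank.items(), key=lambda kv: -kv[1]))
    # C ranks first; D ranks last regardless of what it contains.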

A bigger problem with this search method is that it skims over the larger repository of information available in the Deep Web. Most of the information there is digitally available, but instead of being hosted on a “surface page,” it is embedded in proprietary databases that are connected to the Internet yet only surrender their contents in response to direct queries.

The Deep Web should be a primary concern for several reasons. Currently, a great deal of development is being done on more semantic and comprehensive search capabilities. For this work to be functional and current, it has to be able to adapt to the exponential increase in digital information as well as to where that information resides, both on surface web pages and in the Deep Web.

Also, the availability of information is one of the most important components of digital network systems because without it, the democratic intention of the web is meaningless. Bergman gives the example of several federal organizations that post their information online but not in a format accessible to commercial search engines; the majority of the information is hidden in the “Deep Web.” Though not intentionally deceptive, this unexplored territory of information could become a de facto iron curtain. As information transitions from analog to digital formats, it is important that just as much of it remain readily available.

Muddiest Point #9

This week's lecture went over my head. I thought I had a basic understanding of HTML but realized I didn't when I wasn't able to discern the difference between HTML and XML.

Friday, October 31, 2008

Week 8 Comments

Comment #1:

I posted a comment on Jacqui Taylor’s “Qui Quandaries” blog:
https://www.blogger.com/comment.g?blogID=2005895256228614061&postID=1597573534668681094

Comment #2:

I posted a comment on Sean Kilcoyne’s “spk” blog:
https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=7878297980523559430

Reading Response #8

I reread the literature on XML and I am failing to see the dramatic difference between XML and HTML, except that the former provides a more guided experience for users while not relying on a single standardized tag set. XML does have more specific identifying parameters within its markup, but creators are able to define their own DTDs (Document Type Definitions). Will this affect how the document is searched on the web, and is this new format more compatible with Web 2.0?

Another concern the reading raised was about uniformity. Currently, there is a struggle to create uniform metadata tags to generate more effective web searches. Similarly, semantic web research is trying to find a way to accommodate the diversity of contexts through a uniform metadata scheme so that information networks can run more efficient, streamlined queries. But if XML allows creators to define their own tag systems (although I am not familiar enough with XML to know whether this affects search dynamics), the format could foster more user control while compounding the problem of cataloguing information to make it readily accessible.
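
Since I keep circling this point, here is a small Python illustration of what creator-defined tags mean in practice. The tag set below is invented; unlike HTML's fixed vocabulary, nothing outside the document (or its DTD) dictates what <director> means:

    # Custom XML tags parse fine, but their meaning is up to the creator.
    import xml.etree.ElementTree as ET

    record = ET.fromstring("""
    <film>
      <title>The Draughtsman's Contract</title>
      <director>Peter Greenaway</director>
      <year>1982</year>
    </film>
    """)

    # A search engine still has to be told that <director> names a person --
    # which is exactly the cataloguing problem described above.
    print(record.findtext("title"), "/", record.findtext("director"))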

Muddiest Point #8

We covered HTML in class, but I was wondering: what is the difference between regular HTML and semantic HTML? Is Web 2.0 based on regular or semantic HTML?

Saturday, October 18, 2008

Assignment #5

I created a virtual shelf with references on the art and films of Peter Greenaway.

Tuesday, October 14, 2008

Week 7 Comments

Comment #1:
I posted a comment on Sean’s blog, spk blog.
https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=6165812584986651423

Comment #2
I posted a comment on Tamoul’s blog
https://www.blogger.com/comment.g?blogID=7114620464717775258&postID=3250005245059352985

Reading Response #7: Fair Isn't Always Equal

“Beyond HTML: Developing and Re-imagining Web Guides in a Content Management System” is a case study that describes a university library’s transition from independent web postings to a format streamlined by a CMS. One of the important lessons it demonstrates is that the digital divide doesn’t just affect users. Due to varying levels of staff expertise, the different liaison-built web guides varied radically in content and accessibility, and duplicated one another’s information.

Ironically, one of the primary functions of libraries is to provide a readily accessible information format, but technology and generational divides have created inconsistencies. A CMS can alter that. Uniform templates can be created that allow a modicum of flexibility to accommodate librarians from different disciplines. Most importantly, a CMS creates a single database with a shared vocabulary, which not only reduces storage demands (by eliminating duplicates) but also presents a more consistent, familiar interface to users.
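
As a rough sketch of the template idea: every guide fills in the same fields, so the CMS stores one record shape throughout. The field names here are invented for illustration, not GSU's actual schema:

    # One uniform guide template: same fields, same vocabulary, every page.
    def new_guide(subject, liaison, databases=None, course_pages=None):
        return {
            "subject": subject,
            "liaison": liaison,               # flexibility lives in the values
            "databases": databases or [],     # shared field name everywhere
            "course_pages": course_pages or [],
        }

    art = new_guide("Art History", "J. Smith", databases=["JSTOR"])
    bio = new_guide("Biology", "R. Jones", databases=["PubMed"])
    print(art.keys() == bio.keys())   # True -- one structure, no duplication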

One thing that did strike me in the article was the question of using open-source software. GSU didn’t use it because it was deemed incompatible with their Windows systems. I think it is important for public libraries to consider moving away from commercial products and adopting open-source software. Yes, there are constant upgrades, but this is true of any type of software. Open-source software reduces budgetary demands and can be modified to fit an individual library’s needs. Nor is this unfeasible: as the study itself shows, GSU had the money and resources to create an in-house database system. Time and money could have been saved by using open-source software instead.

Finally, content management systems are important because library staffs don’t have uniform technical training and, ultimately, interfaces must accommodate the user and provide the most efficient access to information.

Muddiest Point #7

I understand that wireless internet works through the transmission and receipt of radio waves, but I don't understand how computers are able to block access to something as intangible as radio waves, or how WEP physically works.

Wednesday, October 1, 2008

Muddiest Point #6

I was wondering if there is a difference between LibraryThing and Goodreads and whether both sites work on the aggregate function the way Google does. Does an aggregate dynamic change how much information is available in a purely referential information system as opposed to a search engine?

Week 6 Comments

Comment #1 (on Lauren’s blog)
https://www.blogger.com/comment.g?blogID=4181925387762663697&postID=481855389631007759

Comment #2 (on Sean’s blog)
https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=1090369698999357745

Tuesday, September 30, 2008

Reading Response #6: It's Not Free and It's Not Fair

This week’s two assigned articles were an interesting combination. Jeff Tyson’s piece outlines the history and physical infrastructure of the Internet, while Andrew Pace’s “Dismantling Integrated Library Systems” chronicles libraries’ struggles with adopting and paying for an ILS. Despite the loss of physical paraphernalia and brick-and-mortar institutions, there is nothing lightweight or accessible about the price of information.

Tyson states that because of its design as interconnected networks within networks, the Internet isn’t owned; it’s a shared information resource. This creates a contradictory dynamic because it costs money to access it, whether through individual ISPs or companies and libraries paying flat fees to provide free access to their users.

I understand that I focus a great deal on the economics of technology, but that is only because, when the benefits of the digital age are heralded, “democratic” and “open source” are ubiquitous descriptions. Whatever the original motivation behind its creation, the Internet is only as democratic as the society it functions in. For example, more restrictive societies monitor websites, censor public information, and limit access. We have a more democratic approach to the exchange of information, but because we are a capitalist democracy, our Internet functions like one. It’s a shared network, but privately owned companies profit from it by reformatting it into a paid service. It’s not as if there is a free point of access and ISPs are merely faster, more dynamic alternatives; they are the only alternatives. The same holds true for effective access to the glut of available information: libraries’ access to networked information is only as good as the ILS they can afford. Not only does this widen the "digital divide," but quality becomes a privilege available only to those with enough money to buy it.

I’d really like to learn the economic history of the Internet, to understand how a shared resource became a utility cost. I think learning about the dynamics of this transformation is important because the Internet, while no longer in its nascent stages, is still open to paradigm changes and could yet become a truly democratic resource. Otherwise, we are only fooling ourselves if we believe that true democracy is a handout waiting to be paid for.

Saturday, September 27, 2008

Assignment #3: Zotero/CiteULike

http://www.citeulike.org/user/rag55

The resources found through CiteULike have the tag "from-citeulike".
The resources imported through Zotero/Google Scholar have the tag "from-zotero".

Friday, September 26, 2008

Week 5 Comments

Comment #1:
https://www.blogger.com/comment.g?blogID=5586031599791302355&postID=8265205753100140876

Comment#2:
https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=7650461811986294684

Reading Response #5: 1984 or 2008?

Though the three assigned articles address disparate types of technology, they all raise concerns about the economics and functionality of these technologies in modern libraries.

Local Area Networks (LANs) connect a small geographic area at a higher transfer rate, while Campus Area Networks (CANs), Metropolitan Area Networks (MANs), and Wide Area Networks (WANs) link together a larger physical radius, albeit at slower transfer rates. Currently, libraries provide digital access to the Internet and to proprietary online catalogues and databases. If the future of information in the digital age is networked information structures -- not just information accessible via the Internet -- how are they to be physically networked? Another concern is that although these reference resources are for public benefit, a significant portion of the documents originates in academic, collegiate, research, and public institutions. Who would shoulder the brunt of the financial and security responsibilities? If it is these underfunded institutions, that cost would add to an already sizeable tab that includes hardware, software, data migration, storage, and digitization.

A viable option could be governmental, or privately owned or underwritten, organizations with a public-interest mandate, but that still raises issues of the safety of transferred information, property rights management, and minority control over a service meant for the public majority.

Karen Coyle’s article “Management of RFID in Libraries” also addresses concerns about the finances and security of technology. In a library context, RFID products can monitor the location and status of an item. While the technology would improve patron satisfaction and reduce the time spent tracking materials, the sheer number of tags necessary for even a small library is substantial. Then there is the topic of privacy. RFID tags were originally designed for retail, where items are purchased and removed from inventory permanently. Libraries, on the other hand, have revolving clientele and inventory, which means there could be an extensive and permanent record of a patron’s borrowing. With current barcode technology, an item is linked to a patron at checkout, but when it is returned, that link is usually deleted from the patron’s account. Because RFID software originated in the retail industry, it would need to be adapted for libraries, and considering that it is still undergoing transformations and doesn’t yet offer a stable platform, libraries should be cautious. In fact, with all technology, libraries must put patrons above the desire to keep up with the digital age.
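
The privacy difference is easier to see in a bare-bones model. This is my own invented sketch, not how any vendor's system actually works: with barcode-style circulation, the patron link is erased at check-in, while a naive RFID log that never clears would accumulate a permanent borrowing history:

    # Two record-keeping styles for the same checkout and return.
    current_loans = {}   # barcode model: item -> patron, only while out
    rfid_log = []        # append-only history: the privacy worry

    def check_out(item_id, patron_id):
        current_loans[item_id] = patron_id
        rfid_log.append(("out", item_id, patron_id))

    def check_in(item_id):
        patron = current_loans.pop(item_id, None)   # link erased at return
        rfid_log.append(("in", item_id, patron))    # link retained forever

    check_out("item-42", "patron-7")
    check_in("item-42")
    print(current_loans)   # {} -- no trace once the item is back
    print(rfid_log)        # the permanent record a patron might worry about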

Muddiest Point #5

I've done a little reading on vector vs. raster digital images and I'm a little confused about which is the superior format. Does it depend on what you are using the image for? When does a vector image have an advantage over a raster image, and vice versa? I am also wondering how easy it is to convert between the two formats.

Friday, September 19, 2008

Week 4 Comments

Comment #1:
https://www.blogger.com/comment.g?blogID=954478916342085840&postID=8456835715519701143

Comment #2:
https://www.blogger.com/comment.g?blogID=1491308052360981630&postID=8211463560023690152

Reading Response #4: All Together Now!

An effective networked information infrastructure must not only be technologically advanced, but also socially functional and readily accessible. Libraries have the added obligation, as public institutions, of trying to create a dynamic and open system in the most economic way possible where information, not data, is the primary currency.

As the article “Data Compression Basics” indicates, compression allows large amounts of information to be stored in smaller spaces. This is especially important for libraries because data storage is a major component of digital libraries, and compression can help lower storage costs. More importantly, it enables libraries to showcase information and become actively involved in networked information services.
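
A quick way to see what compression buys is to squeeze a redundant block of text with zlib (the algorithm family behind gzip). Real image and scan formats use other codecs, but the storage principle is the same:

    # Lossless compression of a deliberately repetitive record.
    import zlib

    text = ("Pittsburgh photograph, b/w print, 1908. " * 200).encode("utf-8")
    packed = zlib.compress(text, 9)    # 9 = maximum compression effort

    print(len(text), "bytes before;", len(packed), "bytes after")
    assert zlib.decompress(packed) == text   # nothing was thrown away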

The articles “Imaging Pittsburgh” and “YouTube and Libraries” demonstrate that data compression allows for interactive media and exhibits that not only assist people with the functions of the library but also provide access to educational and historical resources normally limited by their analog format, while fostering interoperability between different institutions. In this way, multimedia fulfills the technical, social, and access requirements of a functional networked information infrastructure. Cooperation between academic and public libraries also helps create universal metadata definitions, which is important for maintaining a universal bibliography.

IMLS grants that finance pilot projects like the University of Pittsburgh’s Digital Research Library are crucial because they not only provide the budget for requisite technological advances that libraries couldn’t normally afford, but they also allow libraries to demonstrate their compatibility with modern digital infrastructures.

Muddiest Point #4

I am a little confused about the difference between database software and database management systems, if there is a difference at all. For example, would Excel be considered database software, a database management system, or neither?

Sunday, September 14, 2008

Assignment #2: Flickr/Digitization

http://www.flickr.com/photos/30501339@N05/sets/72157607297496345/

Thursday, September 11, 2008

Week 3 Comments

Comment #1:
https://www.blogger.com/comment.g?blogID=7533952523781723717&postID=8784500798835554061

Comment #2:
https://www.blogger.com/comment.g?blogID=954478916342085840&postID=8926795329366757846

Reading Response #3: Who’s Going to Clean Up This Mess?

A well-known quote, attributed to A.J. Vendeland, states that “Using the Internet today is like trying to use a library where all the books have been dumped on the floor and the lights turned out.” Since the advent of the web and the new social dynamic of information exchange, many people are ready to categorize libraries and librarians as obsolete. What these same people fail to realize is that, regardless of its significance, validity, or source, new information is flooding the Internet at an exponential rate. This confused proliferation is exacerbated by the fact that digital information is not limited by a physical location or a restrictive selection process. This ever-growing pool of digital resources is being tirelessly contributed to on a global scale.

As I understand it from the three assigned articles, the information retrieval approach of databases is being applied, via metadata, to create a resource directory for the Internet. Anne Gilliland’s “Introduction to Metadata: Pathways to Digital Information” is excellent in that it clearly outlines the three components of metadata (content, context, and structure). However, her categorizations of the different metadata types and functions seem to mimic the tasks and functions already inherent in traditional librarianship. Gilliland even admits that, “Cultural heritage and information professionals have been creating metadata for as long as they have been managing collections. Increasingly, such metadata are being incorporated into digital information systems.” And while she contends that “museum, archives and library professionals may be most familiar with the term in association with description or cataloging,” she overlooks the fact that, quite apart from metadata, librarians have been contextualizing, processing, and preserving a myriad of resources in a wide range of formats.

Often, librarianship is characterized as the struggling recipient of information science’s technological advances, but I think it’s time for the potential contribution of librarians to the new digital dynamic to be recognized. In fact, aren’t most of these digital platforms trying to replicate what librarians already do? I wonder how many librarians, not information scientists but actual librarians, are involved in the development of these pilot projects, and if they aren’t…they should be. At the same time, I am beginning to think that a higher level of technological training should be mandatory in library education, as the convergence of the two fields seems inevitable.

Muddiest Point #3

I looked at Evergreen's website to learn more about OSS but I am a little confused. In class, Dr. He mentioned the criteria for OSS: 1) free distribution; 2) source code available and accessible; 3) people are allowed to modify the code. The FAQ page on Evergreen's site states that the software is OSS, but that end-user changes aren't automatically added to the core code and only specified users are allowed to change it. Is this still considered Open Source if there are conditional requirements about when code is adopted into the core, or is this "core code" condition the same for all OSS platforms?

Wednesday, September 3, 2008

Week 2 Comments

Comment #1
https://www.blogger.com/comment.g?blogID=7533952523781723717&postID=7521044379786850542&page=1

Comment #2
http://monicalovelis2600discussion.blogspot.com/2008/09/week-3-readings.html

Muddiest Point #2

I am still a little confused about binary representation. I understand that 0s and 1s are used to represent physical data but why only 0s and 1s and not other numbers? And how does a computer interpret binary digits? I guess I am looking for a more physical alphabet that I can understand.
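
Here is the part I can pin down, as a small Python experiment. Each character already has a numeric code, and that number is what gets stored as 0s and 1s; the two digits correspond to the two easily distinguished physical states (off/on, low/high voltage) that hardware can hold reliably:

    # Characters -> numbers -> bit patterns.
    for ch in "LIS":
        code = ord(ch)                      # the character's numeric code
        print(ch, code, format(code, "08b"))
    # L 76 01001100
    # I 73 01001001
    # S 83 01010011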

Reading Response #2: Is Free Better?

Like most consumers, I'm not so much concerned with how it works as with how much it costs. The readings did help me understand how different operating systems function, building on kernels to provide services to applications, but to my untrained eye there wasn't a discernible difference between Linux, Windows, and Mac OS. What did stir my interest was the concept of open-source software. Not only is it free, but people are able to adapt the code?! What confuses me is that such a democratic software resource hasn't created more of a revolution for the mainstream consumer (like myself). Programs such as Napster and LimeWire have dramatically and permanently affected the way music is distributed and marketed. Why hasn't Linux created a similar dynamic? Is the very democratic nature of the open-source medium (constant changes and updates) precluding mainstream distribution (as software pre-installed on PCs and laptops), or is it that there isn't a strong correlation between Linux's software development and hardware development? For example, Apple manufactures popular hardware and peripherals (like the ubiquitous iPods and iPhones) that run on its proprietary operating system, much as Microsoft's Windows runs on PCs. Either way, I am definitely going to learn more about Linux because I am all for technology that not only attempts to close the "digital divide" but also gives users an opportunity to participate directly in developing and adapting software for end users.

Tuesday, September 2, 2008

Week 1 Comments

Comment #1 (posted on 8/28/08)
https://www.blogger.com/comment.g?blogID=7533952523781723717&postID=8990876746965593687

Comment #2 (posted on 8/28/08)
https://www.blogger.com/comment.g?blogID=7821109072135779287&postID=4962163656489671707

Thursday, August 28, 2008

Reading Response #1: At What Cost?

While it was interesting to learn about the physical hardware and design of computers, I was more fascinated by the economics of manufacturing chips. The Wikipedia article on Moore's Law indicates that as transistors shrink, the cost of the facilities needed to manufacture them rises. I was also surprised to learn that computer manufacturing relies enough on the petroleum industry to be significantly affected by its market performance. So, considering the increasing expense, what is the motivation to keep developing faster processing units? If the end user is in mind, why create denser chips which, although faster, are more likely to malfunction? There is also the fact that software and hardware development are not growing at the same rate. Is this industry as independent and aggressive as it seems? By that I mean, do environmental, social, economic, and efficiency issues in any way affect the industry and, if so, how? Because of the depletion of natural resources, are alternative materials being developed for production? I also wonder how much capitalism and competition influence decisions. I would love to read a study, if there is one, about the relationship between economics and consumer demand in the technology industry. It’s kind of like asking which came first, the chicken or the egg: do they build it because we want it, or will we want it once it is built? If someone knows of a good article on this, let me know.

Assignment #1 -- Death Knell for Public Libraries??

All three articles touched on the necessity of libraries updating their formats to keep pace with technological progress and the ubiquity of self-reliant users. Like political debaters, they had no trouble pointing out errors, yet they didn't offer viable, applicable solutions for public libraries. In his paper, Clifford Lynch advocates that technology literacy be taught as early as elementary school and that the teaching of traditional and technological literacy be coordinated. I agree, but he doesn't mention how this is to be done. Many rural and urban schools lack the funds to install computer labs that could accommodate such extensive instruction. Budgets would need to cover software licensing, modern equipment, and the necessary updates.

Statements about schools without books and chairs often sound like liberal clichés, but I did, indeed, teach in a school where students had to sit on art tables because we didn't have enough chairs, and they weren't allowed to take books home because there weren't enough to go around. Our computer lab was small because we couldn't afford licensing for more than 20 computers; in a K-8 school, that was just enough for each class to visit the lab once a week, and even then students had to take turns.

In fact, the only article that addressed the digital divide was the OCLC report: "Far from being young kids with little money in their pockets...the survey found that blog readers are older and richer than many people suppose."

Although UNLV was successful in adopting a new format, it should be remembered that it is a university library whose costs can be offset by tuition and supplemental grants. Even then, the result is only readily available to people who can afford college tuition.

Public libraries serve in a different capacity, the majority of them operating as smaller satellite branches serving individual communities. Patrons are often those who can't afford their own computers or don't otherwise have access to modern research and information systems. So where does the money come from for computer updates, software licensing, and classes to teach people how to use the technology? Donations have dipped in the last several years, and even then, most of that funding is directed to teen programming and adult literacy. Even with the increase in IMLS grants, funds are woefully inadequate for the long-term, continuous commitment that Jason Vaughan's piece about the Lied Library demonstrates is necessary.

I don't want to be a cynical educator and proclaim that it shouldn't be done because we can't afford it and leave it at that. I am all for closing the digital divide and giving underrepresented groups in rural and urban school systems the skills they need to compete. But since so many district funding systems are based on property tax, I would like to know how struggling schools will cope.

Technology is produced and introduced at breakneck speed. Ten years ago, individual laptops in college were rare; now everyone has one. Cell phones were gigantic and unaffordable; now even 3rd graders are texting each other during recess. But even this diffusion is confined to certain economic classes. Will there ever be a reconciliation of the digital divide, or will public institutions and certain economic classes always be at a disadvantage? What good is the universal, public access that technology advocates promote if, in reality, not everyone has access to it?

Wednesday, August 27, 2008

Muddiest Point #1

I was confused about one of the slides Dr. He discussed. It showed that the number of hours people spend accessing information hasn't changed dramatically. However, the slide didn't give a percentage of how much of that information is accessed digitally. I'm sure the digital numbers are higher than the analog, but I was interested in the rate of change and how it affects librarians who work in public and academic institutions and are struggling to keep up with the digital divide.

Along with probably everyone else, I was a little confused about the submission dates for Assignment #1 and Reading Response #1, as well as where to post them (blog or discussion board). I'm sure this will become clearer in later weeks.

First Time for Everything

I am old enough to remember when the Internet wasn't ubiquitous, chat rooms were the wave of the future, and everything was saved on a floppy disk. Academia has forced me into the future, and I fear for the safety of my computer. Yesterday, I started screaming at a Precor treadmill because I couldn't figure out all the bells and whistles. So, if you happen to be in the IS building and you see a laptop sailing out the window...well...it's probably me.