Thursday, November 13, 2008
Week 10 Comments
I commented on Jen’s “Intro to Information Technology” page:
https://www.blogger.com/comment.g?blogID=1475137707322366107&postID=7838913334025505790&page=1
Comment #2:
I commented on Megan’s “The Alley View” page:
https://www.blogger.com/comment.g?blogID=1139180432200060758&postID=5904996233988888078
Reading Response #10: Harvest Time
Typically, harvesting culls information from Web pages using the metadata tags embedded in their HTML code. Initially, there was some dispute as to whether the function should be performed by software or by hand, the concern being that people would create too many disparate terms while a purely automated procedure would exclude important semantic relationships. With Dublin Core rapidly becoming a standard metadata schema, Web pages are increasingly adhering to a common standard and format. But does this improvement in search methods extend to the “Deep Web”?
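To make the idea of harvesting from embedded metadata concrete, here is a minimal sketch in Python. The page snippet and its field values are invented for illustration; the Dublin Core convention of <meta name="DC.xxx"> tags is real, but any actual harvester would be far more robust than this.

```python
from html.parser import HTMLParser

# Hypothetical page fragment carrying Dublin Core <meta> tags.
PAGE = """
<html><head>
<meta name="DC.title" content="Harvest Time">
<meta name="DC.creator" content="A. Blogger">
<meta name="DC.date" content="2008-11-13">
</head><body>...</body></html>
"""

class DCHarvester(HTMLParser):
    """Collect every <meta name="DC.*" content="..."> pair on a page."""
    def __init__(self):
        super().__init__()
        self.records = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            name = d.get("name", "")
            if name.startswith("DC."):
                self.records[name] = d.get("content", "")

parser = DCHarvester()
parser.feed(PAGE)
print(parser.records)
# {'DC.title': 'Harvest Time', 'DC.creator': 'A. Blogger', 'DC.date': '2008-11-13'}
```

Because the tag names follow one shared schema, the harvester never has to guess what a field means, which is exactly the benefit of Dublin Core convergence described above.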
The “Deep Web” includes information that is available to the public but lies outside the scope of traditional search engines because the data is stored in proprietary databases, accessible only through direct queries. However, the OAI protocol allows search engines without normal access to this information to index pages hosted on the “Deep Web” through OAI repositories. As a significant portion of digital information resides on the “Deep Web,” this new avenue of accessibility is important because it helps promote open and transparent policies regarding public information.
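The OAI protocol works over plain HTTP GET requests, built from a “verb” and a few parameters. As a rough illustration, the snippet below assembles such a request; the repository URL is a placeholder, not a real endpoint, and a real harvester would also fetch and parse the XML response.

```python
from urllib.parse import urlencode

# Placeholder repository address -- not a real OAI endpoint.
BASE_URL = "http://example.org/oai"

def build_request(verb, **params):
    """Build an OAI-PMH request URL (the protocol uses plain HTTP GET)."""
    query = urlencode({"verb": verb, **params})
    return f"{BASE_URL}?{query}"

# Ask the repository to list its records in Dublin Core format.
url = build_request("ListRecords", metadataPrefix="oai_dc")
print(url)
# http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

Any search engine that can issue this kind of request can index a repository’s records without ever crawling the database behind it, which is how otherwise “deep” content becomes findable.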
Muddiest Point #10
Saturday, November 8, 2008
Thursday, November 6, 2008
Week 9 Comments
I commented on Lauren’s blog “LIS 2600 Land”
https://www.blogger.com/comment.g?blogID=4181925387762663697&postID=3072857614832667163
Comment #2:
I commented on Theresa’s blog “Intro to Information Technology”
https://www.blogger.com/comment.g?blogID=5586031599791302355&postID=9132805377535596301
Reading Response #9: Gone Fishing
A bigger problem with this search method is that it skims over the larger repository of information available in the Deep Web. Most of the information there is digitally available, but instead of being hosted on a “surface page,” it is embedded in proprietary databases that are reachable through the Internet yet invisible to conventional crawlers.
The Deep Web should be a primary concern for several reasons. Currently a great deal of development is being done on more semantic and comprehensive search capabilities. For this work to be functional and current, it has to be able to adapt to the exponential increase in digital information as well as its location, both on surface web pages and the Deep Web.
Also, the availability of information is one of the most important components of digital network systems because without it, the democratic intention of the web is meaningless. Bergman gives the example of several federal organizations that post their information online, but not in a format accessible to commercial search engines; the majority of the information is hidden in the “Deep Web.” Though not intentionally deceptive, this unexplored territory of information could inadvertently become an iron curtain. As information transitions from analog to digital formats, it is important that the same amount of information remain readily available.
Muddiest Point #9
Friday, October 31, 2008
Week 8 Comments
I posted a comment on Jacqui Taylor’s “Qui Quandaries” blog:
https://www.blogger.com/comment.g?blogID=2005895256228614061&postID=1597573534668681094
Comment #2:
I posted a comment on Sean Kilcoyne’s “spk” blog:
https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=7878297980523559430
Reading Response #8
Another concern the reading raised was uniformity. Currently, there is a struggle to create uniform metadata tags to generate more effective web searches. Similarly, semantic web research is trying to find a way to incorporate diverse contexts within a uniform metadata scheme so that information networks can run more efficient, streamlined queries. But if XML allows creators to define their own tag systems (although I am not familiar enough with XML to know how this affects search dynamics), this format could foster more user control while compounding the problem of cataloguing information to make it readily accessible.
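The creator-defined-tags problem can be shown in a few lines. The two records below are invented, but they describe the same resource with different home-grown vocabularies, so a search tool is reduced to guessing which element holds the title:

```python
import xml.etree.ElementTree as ET

# Two hypothetical records for the same book, using author-invented tags.
RECORD_A = "<item><title>Weaving the Web</title><writer>Berners-Lee</writer></item>"
RECORD_B = "<book><name>Weaving the Web</name><author>Berners-Lee</author></book>"

def extract_title(xml_text):
    """Without a shared scheme, the tool must try likely tag names in turn."""
    root = ET.fromstring(xml_text)
    for candidate in ("title", "name", "heading"):
        node = root.find(candidate)
        if node is not None:
            return node.text
    return None

print(extract_title(RECORD_A))  # Weaving the Web
print(extract_title(RECORD_B))  # Weaving the Web
```

Both records are perfectly valid XML; the flexibility is the point, and also the cataloguing problem. A uniform scheme (Dublin Core, for instance) would make the guessing loop unnecessary.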
Muddiest Point #8
Saturday, October 18, 2008
Tuesday, October 14, 2008
Week 7 Comments
I posted a comment on Sean’s blog, spk blog.
https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=6165812584986651423
Comment #2
I posted a comment on Tamoul’s blog
https://www.blogger.com/comment.g?blogID=7114620464717775258&postID=3250005245059352985
Reading Response #7: Fair Isn't Always Equal
Ironically, one of the primary functions of libraries is to provide readily accessible information, but technological and generational divides have created inconsistencies. A CMS can change that. Uniform templates can be created that allow a modicum of flexibility to accommodate librarians from different disciplines. Most importantly, a CMS creates a single database with a shared vocabulary, which not only reduces storage requirements (by eliminating duplicates) but also creates a more familiar interface for users.
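As a toy sketch of the deduplication a shared database enables, consider records from different departments keyed by a controlled vocabulary term. All of the data below is invented; the point is only that identical entries collapse into one stored copy once keys are normalized:

```python
# Invented departmental records; two describe the same guide, with only
# cosmetic differences in the subject term.
records = [
    {"subject": "Chemistry", "guide": "Citing sources in ACS style"},
    {"subject": "chemistry ", "guide": "Citing sources in ACS style"},
    {"subject": "History", "guide": "Finding primary sources"},
]

deduped = {}
for r in records:
    # Normalizing the key is the "identical vocabulary" step.
    key = (r["subject"].strip().lower(), r["guide"])
    deduped[key] = r  # duplicates overwrite rather than accumulate

print(len(records), "->", len(deduped))  # 3 -> 2
```

Three separate pages maintained by hand become two canonical records; the savings scale with how much boilerplate the departments used to copy.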
One thing that did strike me in the article was the question of using open-source software. GSU didn’t use it because it was deemed incompatible with their Windows systems. I think it is important for public libraries to consider moving away from commercial products and adopting open-source software. Yes, there are constant upgrades, but that is true of any software. Open source reduces budgetary demands and can be modified to fit an individual library’s needs. Nor is this unfeasible, as the study itself shows: GSU had the money and resources to create an in-house database system. Time and money could have been saved by using open-source software.
Finally, CMSs are important because libraries don’t have uniform technical training and, ultimately, interfaces must accommodate the user and provide the most efficient access to information.
Muddiest Point #7
Thursday, October 2, 2008
Assignment #4: How to Create a Flickr Badge
http://www.screencast.com/users/Little_Petunia/folders/Jing/media/84c6922c-3e1f-44f3-b225-d9cd8d517bb8
Photo Tutorial
Step 1:
http://www.flickr.com/photos/my_little_petunia/2907880597/in/set-72157607682662727/
Step 2:
http://www.flickr.com/photos/my_little_petunia/2908799998/
Step 3:
http://www.flickr.com/photos/my_little_petunia/2907885979/
Step 4:
http://www.flickr.com/photos/my_little_petunia/2908736748/in/set-72157607682662727/
Step 5:
http://www.flickr.com/photos/my_little_petunia/2907893353/in/set-72157607682662727/
Step 6:
http://www.flickr.com/photos/my_little_petunia/2908741206/in/set-72157607682662727/
Step 7:
http://www.flickr.com/photos/my_little_petunia/2907897891/in/set-72157607682662727/
Step 8:
http://www.flickr.com/photos/my_little_petunia/2908748016/in/set-72157607682662727/
Step 9:
http://www.flickr.com/photos/my_little_petunia/2908750468/in/set-72157607682662727/
Wednesday, October 1, 2008
Muddiest Point #6
Week 6 Comments
Comment #1 (on Lauren’s blog)
https://www.blogger.com/comment.g?blogID=4181925387762663697&postID=481855389631007759
Comment #2 (on Sean’s blog)
https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=1090369698999357745
Tuesday, September 30, 2008
Reading Response # 6: It's Not Free and It's Not Fair
Tyson states that because of its design as interconnected networks within networks, the Internet isn’t owned; it’s a shared information resource. This creates a contradictory dynamic because it costs money to access it, whether through individual ISPs or companies and libraries paying flat fees to provide free access to their users.
I understand that I focus a great deal on the economics of technology, but that is only because “democratic” and “open source” are ubiquitous descriptors whenever the benefits of the digital age are heralded. Whatever the original motivation behind its creation, the Internet is only as democratic as the society it functions in. For example, more prohibitive societies monitor websites, censor public information, and restrict access. We have a more democratic approach to the exchange of information, but because we are a capitalist democracy, our Internet functions like one. It’s a shared network, but privately owned companies profit from it by reformatting it into a paid service. It’s not as if there is a free point of access and ISPs are merely faster, more dynamic alternatives; they are the only alternatives. The same holds true for effective access to the glut of available information. Libraries’ access to networked information is only as good as the ILS they can afford. Not only does this widen the "digital divide," but quality becomes a privilege available only to those with enough money to buy it.
I’d really like to learn the economic history of the Internet, to understand how a shared resource became a utility cost. I think learning about the dynamics of this transformation is important because the Internet, while no longer in its nascent stages, is still open to paradigm changes and could still become a democratic resource. Otherwise, we are only fooling ourselves if we believe that true democracy is a handout waiting to be paid for.
Saturday, September 27, 2008
Assignment #3: Zotero/CiteULike
The resources found through CiteULike have the tag "from-citeulike"
The imported resources found through Zotero/Google Scholar have the tag "from-zotero"
Friday, September 26, 2008
Week 5 Comments
https://www.blogger.com/comment.g?blogID=5586031599791302355&postID=8265205753100140876
Comment#2:
https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=7650461811986294684
Reading Response #5: 1984 or 2008?
Local Area Networks, or LANs, cover a small geographic area and have a higher rate of transfer, while Campus Area Networks (CANs), Metropolitan Area Networks (MANs), and Wide Area Networks (WANs) link together a larger physical radius, albeit at a slower transfer rate. Currently, libraries provide digital access to the Internet and to proprietary online catalogues and databases. If the future of information in the digital age is networked information structures -- not just information accessible via the Internet -- how are they to be physically networked? Another concern is that although these reference resources are for public benefit, a significant portion of the documents originate in academic, collegiate, research, and public institutions. Who would shoulder the brunt of the financial and security responsibility? If it is these underfunded institutions, that cost would add to an already sizeable tab that includes hardware, software, data migration, storage, and digitization.
A viable option could be governmental or privately owned or underwritten organizations with a public interest but that still raises issues of the safety of transferred information, property rights management, and minority control over a service for the public majority.
Karen Coyle’s article “Management of RFID in Libraries” also addresses financial and security concerns with technology. In a library context, RFID products can monitor the location and status of an item. While the technology would improve patron satisfaction and reduce the time spent tracking materials, the sheer number of tags necessary for even a small library is substantial. Then there is the topic of privacy. RFID tags were originally designed for retail, where items are purchased and removed from inventory permanently. Libraries, on the other hand, have a revolving clientele and inventory, which could mean an extensive and permanent record of a patron’s account. Currently, barcode technology allows an item to be checked out, and when it is returned, it is usually deleted from the patron’s account. Because RFID software originated in the retail industry, it would need to be adapted for libraries; considering that the technology is still undergoing transformations and doesn’t yet offer a stable platform, libraries should be cautious. In fact, with regard to all technology, libraries must put patrons above the desire to keep up with the digital age.
Muddiest Point #5
Friday, September 19, 2008
Week 4 Comments
https://www.blogger.com/comment.g?blogID=954478916342085840&postID=8456835715519701143
Comment #2:
https://www.blogger.com/comment.g?blogID=1491308052360981630&postID=8211463560023690152
Reading Response #4: All Together Now!
As the article “Data Compression Basics” indicates, compression allows large amounts of information to be stored in smaller spaces. This is especially important for libraries because data storage is a major component of digital collections, and compression can help lower its cost. More importantly, it enables libraries to showcase information and become actively involved in a networked information service.
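A tiny demonstration of lossless compression makes the storage point tangible. The sample text below is invented, but it mimics the kind of repetitive boilerplate catalogue records share, which is exactly what general-purpose compressors exploit:

```python
import zlib

# Repetitive, record-like text (like catalogue entries sharing boilerplate
# field labels) compresses very well.
text = ("Title: ... | Creator: ... | Date: 2008 | " * 50).encode("utf-8")

compressed = zlib.compress(text)
restored = zlib.decompress(compressed)

assert restored == text  # lossless: the original comes back exactly
print(f"{len(text)} bytes -> {len(compressed)} bytes")
```

The round trip shows why compression is safe for preservation-grade text: nothing is lost, only redundancy is squeezed out. (Image and video compression in the projects discussed next is typically lossy, trading some fidelity for much larger savings.)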
The articles “Imaging Pittsburgh” and “YouTube and Libraries” demonstrate that data compression allows for interactive media and exhibits that not only assist people with the functions of the library but also provide access to educational and historical resources normally limited by their analog format, as well as interoperability between different institutions. In this way, multimedia fulfills the technical, social, and access requirements of a functional networked information infrastructure. Cooperation between academic and public libraries also helps create universal metadata definitions, which are important for maintaining a universal bibliography.
IMLS grants that finance pilot projects like the University of Pittsburgh’s Digital Research Library are crucial because they not only provide the budget for requisite technological advances that libraries couldn’t normally afford, but they also allow libraries to demonstrate their compatibility with modern digital infrastructures.
Muddiest Point #4
Sunday, September 14, 2008
Assignment #2: Flickr/Digitization
Thursday, September 11, 2008
Week 3 Comments
https://www.blogger.com/comment.g?blogID=7533952523781723717&postID=8784500798835554061
Comment #2:
https://www.blogger.com/comment.g?blogID=954478916342085840&postID=8926795329366757846
Reading Response #3: Who’s Going to Clean Up This Mess?
As I understand it from the three assigned articles, the information retrieval approach of databases is being applied to the Internet through metadata formats that create resource directories. Anne Gilliland’s “An Introduction to Metadata: Pathways to Digital Information” is excellent in that it clearly outlines the three components of metadata (content, context, and structure). However, her categorizations of different metadata types and functions seem to mimic the current tasks and functions inherent in traditional librarianship. Gilliland even admits that, “Cultural heritage and information professionals have been creating metadata for as long as they have been managing collections. Increasingly, such metadata are being incorporated into digital information systems.” And while she contends that “museum, archives and library professionals may be most familiar with the term in association with description or cataloging,” she overlooks the fact that, quite apart from metadata, librarians have been contextualizing, processing, and preserving a myriad of resources in a wide range of formats.
Often, librarianship is characterized as the struggling recipient of the technological conditions of information sciences but I think that it’s time for the potential contribution of librarians to the new digital dynamic to be recognized. In fact, aren’t most of these digital platforms trying to replicate what librarians do already? I wonder how many librarians, not information scientists but actual librarians, are involved in the development of these pilot projects and if they aren’t…they should be. Conversely, I am beginning to think that a higher level of technological training should be mandatory in library education as the convergence of the two fields seems inevitable.
Muddiest Point #3
Wednesday, September 3, 2008
Week 2 Comments
https://www.blogger.com/comment.g?blogID=7533952523781723717&postID=7521044379786850542&page=1
Comment #2
http://monicalovelis2600discussion.blogspot.com/2008/09/week-3-readings.html
Muddiest Point #2
Reading Response #2: Is Free Better?
Tuesday, September 2, 2008
Week 1 Comments
https://www.blogger.com/comment.g?blogID=7533952523781723717&postID=8990876746965593687
Comment #2 (posted on 8/28/08)
https://www.blogger.com/comment.g?blogID=7821109072135779287&postID=4962163656489671707
Thursday, August 28, 2008
Reading Response #1: At What Cost?
While it was interesting to learn about the physical hardware and design of computers, I was more fascinated by the economic component of manufacturing chips. The Wikipedia article on Moore's Law indicates that as the size of transistors decreases, the manufacturing cost per unit increases. I was also surprised to learn that computer manufacturing is reliant enough on the petroleum industry to be significantly affected by its market performance. So, considering the increasing expense, what is the motivation to keep developing faster processing units? If the end user is in mind, why create denser chips which, although faster, are more likely to malfunction? There is also the fact that software and hardware development are not growing at the same rate. Is this industry as independent and aggressive as it seems? By that I mean, do environmental, social, economic, and efficiency issues in any way affect the industry and, if so, how? Because of the depletion of natural resources, are alternative materials being developed for production? I also wonder how much capitalism and competition influence decisions. I would love to read a study, if there is one, about the relationship between economics and consumer demands in the technology industry. It’s kind of like asking which came first, the chicken or the egg? Do they build it because they want it or will they want it if it is built? If someone knows of a good article on this, let me know.
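Moore's Law itself is just exponential arithmetic, and a back-of-the-envelope version is easy to write down. The starting point below is the Intel 4004's roughly 2,300 transistors in 1971; the two-year doubling period is the commonly quoted rule of thumb, and the projection is only illustrative, not a claim about any actual 2008 chip:

```python
# Rule-of-thumb Moore's Law: transistor counts roughly double every two
# years. Base point: Intel 4004, ~2,300 transistors, 1971.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Illustrative projection for the year this post was written.
print(f"{transistors(2008):,.0f}")  # on the order of hundreds of millions
```

The formula makes the economics question vivid: the count grows geometrically, so any fixed per-transistor cost improvement has to keep pace with an exponent, which is precisely where the fabrication-cost pressure comes from.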
Assignment #1 -- Death Knell for Public Libraries??
Statements about schools without books and chairs often sound like liberal cliches but I did, indeed, teach in a school where students had to sit on art tables because we didn't have enough chairs and they weren't allowed to take the books home because there weren't enough to go around. Our computer lab was small because we couldn't afford licensing for more than 20 computers and in a K-8 school, that was enough for each class to visit the computer lab once a week and even then, students had to take turns.
In fact, the only article that addressed the digital divide was the OCLC report: "Far from being young kids with little money in their pockets...the survey found that blog readers are older and richer than many people suppose."
Although UNLV was successful in adopting a new format, it should be remembered that it is a university library whose costs can be offset by tuition and supplemental grants. Even then, its resources are only readily available to people who can afford college tuition.
Public libraries serve in a different capacity, the majority of them working as smaller, satellite branches serving individual communities. Patrons are often those who can't afford their own computers or don't otherwise have access to modern research and information systems. So where does the money come from for computer updates, software licensing, and classes to teach people how to use the technology? Donations have dipped in the last several years, and even then, most of that funding is directed to teen programming and adult literacy. Even with the increase in IMLS grants, funds are woefully inadequate for the long-term, continuous commitment demonstrated in Jason Vaughan's piece about the Lied Library.
I don't want to be a cynical educator and proclaim that it shouldn't be done because we can't afford it and leave it at that. I am all for closing the digital divide and giving underrepresented groups in rural and urban school systems the skills they need to compete. But since so many district funding systems are based on property tax, I would like to know how struggling schools will cope.
Technology is produced and introduced at breakneck speed. Ten years ago, individual laptops in college were rare; now everyone has one. Cell phones were gigantic and unaffordable; now even third graders text each other during recess. But even this dispersal is confined to certain economic classes. Will there ever be a reconciliation of the digital divide, or will public institutions and certain economic classes always be at a disadvantage? What good is promoting the universal, public access that technology advocates if, in reality, not everyone has access to it?
Wednesday, August 27, 2008
Muddiest Point #1
Along with everyone else probably, I was a little confused about the date of submission for Assignment #1 and Reading #1 as well as the posting place (blog or discussion board). I'm sure this will become clearer in later weeks.