
ARTstor to add VAGA Artists to Contemporary Art Collection

ARTstor announces the addition of the following collections of images. For assistance with ARTstor, please contact the VRC.

Modern and contemporary art from VAGA member artists
The Visual Artists and Galleries Association (VAGA) and ARTstor have reached an agreement through which approximately 4,000 images by VAGA member artists are now available to ARTstor users. More images by VAGA artists will be made available as additional collections of modern and contemporary art are added to ARTstor.

Manuscripts and early printed books from the Bodleian Library
ARTstor is pleased to announce the initial release of over 4,000 high-quality images of manuscripts and early printed books from the Bodleian Library at Oxford University.

Architectural history of Venice, Italy
ARTstor has added approximately 200 photographs from Sarah Quill’s unique photographic archive depicting the buildings and civic life of Venice.

By mmacken

Twitter: meganmacken

Director, Visual Resources Center and Digital Media Archive, Division of the Humanities, The University of Chicago.

My academic background ranges from classics and comparative literature to modern art and architectural history, and so, naturally, I am a librarian. I have graduate degrees in art history and library science, manage digital image and audio collections for the Division of the Humanities, and am always eager to collaborate across disciplines, universities, and even continents! I'm interested in exploring the library's role in Digital Humanities, not just as an archive for born-digital objects but as a locus for Digital Humanities centers. At THATCamp I'm excited to find out how others are visualizing data, especially to facilitate creative research and teaching in art and architectural history and film studies. How can visual data (still images, film, 3D models, etc.) move beyond illustration and become a source for research? What kinds of creative information retrieval interfaces do we need to do this? We've got metadata... let's make it work!