>> From the Library of Congress in Washington, DC. ^M00:00:04 ^M00:00:19 >> Beacher Wiggins: I'm Beacher Wiggins, director for acquisitions and bibliographic access. And we want to welcome you to another in the Library Services series, LC's Digital Future and You, which has been and continues to be coordinated by Angela Kinney and Judith Cannon. And for this particular one they had to expend extra energy because I confused them with my timing and my availability. So I express extra thanks for today's session. Today we want to update you on where we are with BIBFRAME. We won't have formal introductions; we'll have each of us introduce ourselves as we speak, and there will be four of us by the end of the session. I will be the briefest of all, you will be pleased to hear. ^M00:01:23 [ Phone Ringing ] ^M00:01:26 [ Laughter ] ^M00:01:28 Excuse me. Is this being recorded? BIBFRAME has been underway now, the BIBFRAME pilot, pilot 2, has been underway now for almost a year. And the staff who've been involved have been using the BIBFRAME 2.0 model. We launched this pilot to test the library's approach to linked open data using the RDF (Resource Description Framework) structure. We are at a point now where we are assessing the pilot based on the work of the 60 or so staff members who have been participating in this for the past 11 or so months. These staff have been cataloguing the materials that they receive daily using BIBFRAME. And in most instances, they are also continuing to catalog those materials using the MARC format. So there's been a lot of contribution on the part of participants who have taken part in this. Before we depart next week for the American Library Association annual conference in New Orleans we will issue a report of what we've learned during the pilot based on the input, the feedback and the output of the participants. Staff in the Network Development and MARC Standards Office (NetDev) and in COIN are now frantically putting the final touches on the report for my review. Based on our report and assessment we also want to be able to make some definitive statements about LC's commitment to BIBFRAME. Today Nate Trail, Jodi Williamschen, Les Hawkins and I will give you, our LC colleagues, an update before we do similar briefings and updates for our external colleagues in New Orleans. I'll speak briefly about the goals of the pilot. These goals form the framework of the pilot assessment and the report that will be issued shortly that you'll all be seeing. Nate will address the BIBFRAME database, the conversion process, and related points. Les and Jodi will focus on the BIBFRAME editor, the changes made to it for the pilot, and improvements that were made during the pilot based on the feedback from the participants engaged in it. ^M00:04:09 ^M00:04:13 The goals were set from three perspectives: mine as the director, NetDev's as the technical lead in the project, and COIN's as the training lead. From my perspective as director I want us to be able to determine that BIBFRAME can be applied to a large participant pool, i.e. that BIBFRAME will be scalable and could be adopted by institutions of any size. If the Library of Congress can do it anyone should be able to. I want to determine that LC will indeed pursue BIBFRAME as the data pathway and the ultimate replacement for the MARC format.
And lastly, I want to be able to announce the decision to the library and the vendor community, because until the library is able to make a firm commitment we won't get the buy-in and the system development that will be needed to make BIBFRAME a viable library community format structure. ^M00:05:23 ^M00:05:27 To reach such a decision point much work was required in the technical arena and the associated training sphere. From the NetDev perspective what was desired was a realistic cataloging environment that allows catalogers to create bibliographic resource and authority descriptions in BIBFRAME just as they are able to do in the MARC environment. To achieve this, a database conversion was required, i.e. MARC to BIBFRAME. ^M00:05:58 ^M00:06:03 And you'll hear a bit more about that from Nate. There are many steps involved in making this a reality. The pilot assessment that I talked about will cover this and show how readily we were able to address the various issues and concerns that were raised. Associated with the converted database was a need for an input tool, the BIBFRAME editor, that would enable catalogers to handily and easily input data for the BIBFRAME descriptions they were creating. Many of the editor improvements reflected the COIN perspective, that is, the teaching perspective and vantage point, as well as the ongoing interaction with the pilot participants and the improvements that were made to the editor along the way. For pilot 2 a lot of energy and effort went into enhancing the editor and we hope we'll have a better tool as we move forward. And Les and Jodi will talk more about that shortly. So now I ask you to stay tuned for the report and assessment that will come out by next week. And now we will hear Nate talk about the database and some of the work associated with that. ^M00:07:31 ^M00:07:44 >> Nate Trail: Good morning everyone, I'm glad to see that some of us are not [inaudible] or are we more nerdy than we care to admit. So yeah, I'm going to talk about the BIBFRAME database and conversions between MARC and BIBFRAME, and some other things. So this is an overview of the data flow from MARC to BIBFRAME. First of all, we take all of our name title and title authority records out of ID and we do a conversion from MARC authorities to BIBFRAME, convert those to works. And then we convert all the MARC records and try to match against those name title authority works and store them in the database as well. And then the BIBFRAME description circle represents where the editor is: you can natively create a brand-new BIBFRAME description or call something up out of the database, edit it and save it back. And during all that time there are links between and among the BIBFRAME database itself and id.loc.gov. ^M00:08:52 ^M00:08:58 So there are about 1.2 million title and name title authorities that we converted to BIBFRAME, about 17 million BIB records, and that includes some stuff that we don't distribute outside. But we wanted to try to create a catalog just like the Voyager catalog system has, so that when somebody is doing descriptions they can call up whatever is available inside the catalog. We have daily feeds coming from both ID and the ILS. And since people are doing duplicate entry we have to be able to block things where the cataloger created it natively in BIBFRAME, and we don't want the record from the ILS to come back through. So we're doing a merging that takes a converted record and matches it to existing BIB records.
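To make the conversion step Nate describes concrete, here is a rough, heavily simplified sketch in Python (using pymarc and rdflib) of turning a MARC bibliographic record into a BIBFRAME Work/Instance pair. The real converter handles hundreds of fields and builds structured title and contribution nodes; the base URI, file name, and single title mapping below are illustrative assumptions only.

```python
# A much-simplified illustration of converting a MARC bibliographic record into a
# BIBFRAME Work/Instance pair. Real conversions are far richer; the URIs and the
# title mapping here are placeholders for illustration.
from pymarc import MARCReader
from rdflib import Graph, Literal, Namespace, RDF, URIRef

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

def convert_record(record, base="http://example.org/resources/"):
    g = Graph()
    g.bind("bf", BF)

    # Use the MARC 001 control number to mint placeholder Work/Instance URIs.
    ctrl = record.get_fields("001")
    bib_id = ctrl[0].value() if ctrl else "unknown"
    work = URIRef(f"{base}{bib_id}#Work")
    instance = URIRef(f"{base}{bib_id}#Instance")

    g.add((work, RDF.type, BF.Work))
    g.add((instance, RDF.type, BF.Instance))
    g.add((instance, BF.instanceOf, work))

    # Map the 245 $a to a title (simplified: full BIBFRAME uses a bf:Title node).
    f245 = record.get_fields("245")
    if f245 and f245[0]["a"]:
        g.add((work, BF.title, Literal(f245[0]["a"])))
    return g

with open("records.mrc", "rb") as fh:  # placeholder file of MARC records
    for rec in MARCReader(fh):
        print(convert_record(rec).serialize(format="turtle"))
```

The same Work/Instance shape is what the editor creates natively and what the merge step described below matches against.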
And we also have processes that allow us to validate: when a BIBFRAME description is natively created and converted to RDF, and converted to JSON or converted back to XML, we want to be able to validate that it is still a good structure. ^M00:10:05 So there are a lot of different processes and workflows that we have to maintain and support, and places where things can break all the way through. So this is a picture of our cataloging homepage, it's not the BIBFRAME database homepage. It's not particularly meant for outside users but it can be. We're focusing on allowing the cataloger to do their work, and we're also focusing on being able to count things for ^M00:10:33 ^M00:10:41 when Beacher wants to know how many of one thing or how many of another. So we want to be able to support the workflows, but we also want to do statistical analysis on, you know, what did we get right, what did we get wrong, how can we clean up a certain batch of records. So there are all kinds of different ways of cutting the data once it's in BIBFRAME. ^M00:10:59 ^M00:11:15 Okay, so there's keyword searching, there's title searching, name and subject searching. There are facets once you get in, by language and LC class. There are all kinds of filtering by work or instance. There's left-anchored browsing. We'll do some of those. So this is a title search for Babes in Toyland and you can see that when you look down the left these refinements are different facets. So for Babes in Toyland there's a whole bunch of notated music, some movies, some audio, etcetera. ^M00:11:48 ^M00:11:56 And let's see, for left-anchored browsing we can try an imprint. ^M00:12:01 ^M00:12:09 So these are all the imprints of Harcourt and you can see that on the title page of many of these things it's expressed in a bunch of different ways. They're probably all the same publisher but right now from the MARC record this is what we have. So as we go through these processes we're probably going to go behind the scenes and say let's see how many of these things we can say are the same publisher or from the same publishing agency and link them together better. But at least this way we have them in a line and you can do quick analysis of these things. ^M00:12:41 ^M00:12:46 So the editor is tied tightly to the database and it allows us to take up any description for editing as long as it's within a particular defined profile. And the reason for profiles is that BIBFRAME is a very large descriptive container set, but, say, notated music has a pretty defined set of things that you want to use to describe it. So the profile allows us to narrow down what we want the cataloger to be able to do and be required to do. So you can call up the record and edit its description or you can add an instance or an item. You can decide that this particular thing that you're working on is linked to another description in the database. And you can link it to name authority records so that we're not just entering strings anymore, we're actually linking to a defined name authority or subject LCSH authority, etcetera. And then you can store it back in the database and it looks just like all the other descriptions that we already have. But there are a number of issues that we've discovered as we go along. We don't have a profile for every type of description that there is.
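A minimal sketch of the serialization round-trip check Nate mentions at the top of this section, assuming rdflib 6+ (which includes JSON-LD support) and a placeholder file name: the same description should come back structurally identical whether it travels as RDF/XML or JSON.

```python
# Parse the stored RDF/XML, push it through JSON-LD and back, and confirm the graph
# survives intact. rdflib.compare.isomorphic accounts for blank-node differences.
from rdflib import Graph
from rdflib.compare import isomorphic

original = Graph().parse("description.rdf", format="xml")   # placeholder file

jsonld_text = original.serialize(format="json-ld")
round_tripped = Graph().parse(data=jsonld_text, format="json-ld")

if isomorphic(original, round_tripped):
    print("Round trip preserved the description.")
else:
    print("Structural difference detected; investigate before posting.")
```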
And when we do a conversion of a record from MARC there isn't really an exact one-to-one match that tells us that this formerly MARC record should belong to this BIBFRAME profile. So when you call up a record we have to make a decision about which profile we're going to put it into. And when you make that default decision there are RDF description elements that are then not available to be edited in that BIBFRAME profile. So what do we do with that information? So far, we have said we're going to keep it and post it back, but we're not going to allow you to edit it, which is a problem for people who want to be able to actually edit the whole thing. So the editor itself has got some things that we need to work through, but we're discovering these things. There's a difference between an IBC update and a clone even though it looks like you're doing the same thing. So if you see an initial bibliographic control record in the database and you want to call it up and edit it and save it back, you're basically trying to save the same set of descriptions back: a work, instance and item. But if you want to clone something you want to call up that same record but save it under a different identifier. So we've had to find ways to say clone this thing, give it a new identifier, probably the LCCN, and then save it back completely parallel but having basically the same metadata except for a new title or something like that. And catalogers keep coming up with new ways of showing us that we haven't gotten it done yet, so we keep doing new profiles. ^M00:15:44 ^M00:15:48 So between the database and the editor, when you call up a description we have to do a little bit of wrangling because we store the data in XML but the editor wants to see JSON. So our RDF has got to be expressible in a variety of ways. So in the database on any given record you'll see that you can convert it to all those different things. And I talked a little bit about the profiles being one of the mechanisms that are causing us problems even though they give us a lot of advantages as well. There is also a distinction between how something is created natively in BIBFRAME versus how it was converted from a MARC record. If you are doing a mass conversion you make decisions about what happens to the [inaudible], etcetera. But when somebody is natively creating it in BIBFRAME you can actually know their intent a lot better. So BIBFRAME native descriptions are in some ways better, but they need to be equivalent so that they live side-by-side in the database. So there are about five different ways that you can express a source based on whether it was created natively or not. But we have to find ways to make it all look the same so that when you push a button to query it, it's queryable identically. ^M00:17:10 ^M00:17:15 So when we do the merging, what we're doing is we're trying to match basically on the name and the title. If we find a match when something is converted, we'll take the work and instance that's created, we'll take the subjects and the classification data and put it onto the work that we found in the database. And we'll discard the work and attach that instance to that work. So this has been good in some ways but it's also been problematic, because some of the time when you have a name title that matches you really want to do an RDA expression work instead of doing a merge and saying this is an instance of that other work in the database.
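The merge rule Nate has just described can be sketched in plain Python. This is an illustration of the logic only, not LC's actual merge specification: match on a normalized name-plus-title key, and on a hit copy the subjects and classification onto the existing work, attach the converted instance to it, and discard the converted work.

```python
# Illustrative match-and-merge for a converted MARC record against existing works.
from dataclasses import dataclass, field

def match_key(name, title):
    """Normalize the name + title pair used for matching."""
    return ((name or "").casefold().strip(), (title or "").casefold().strip())

@dataclass
class Work:
    name: str
    title: str
    subjects: list = field(default_factory=list)
    classification: list = field(default_factory=list)
    instances: list = field(default_factory=list)

def merge_converted(converted_work, converted_instance, works_by_key):
    key = match_key(converted_work.name, converted_work.title)
    existing = works_by_key.get(key)
    if existing is None:
        # No match: keep the converted work as a new work in the database.
        converted_work.instances.append(converted_instance)
        works_by_key[key] = converted_work
        return converted_work
    # Match: carry subjects and classification over, attach the instance, and drop
    # the converted work (this is where edition-specific data such as an illustrator
    # can be lost, as Nate notes next).
    existing.subjects.extend(s for s in converted_work.subjects if s not in existing.subjects)
    existing.classification.extend(c for c in converted_work.classification
                                   if c not in existing.classification)
    existing.instances.append(converted_instance)
    return existing
```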
So we're going to have to go back through the database and look for places where the thing should not be a merge onto another instance -- onto another work, but should actually be a work that has an expression-like relationship to the work that was found in the database. So also, on bibliographic records there are 7XX related titles and those we need to be able to -- oh no, I'm sorry, 7XX names. So for somebody who's an illustrator of a work, when we do a merge and we drop the work, the illustrator also goes away. Because the original work does not have an illustrator but this particular edition does. So the merge in that case, saying that this is an instance of that, may be true, but we've lost some key information about that. And the illustrator should not be merged onto the original work. So we really need to have a work in between that says this is a work in its own right with an illustrator, but it's linked to that original work. And sometimes of course there are no-title or untitled things, so we don't want to merge those. So the merge specs are pretty detailed here; I'll just let you read them, it's not that interesting. Here's an instance where something worked quite well. And earlier today that image wasn't working, so I'm glad to see the cover art thing got put back in. So this BIB ID 203786 was merged with this name authority work, Huckleberry Finn. And when you look at that Huckleberry Finn you can see that it's related to a whole bunch of other stuff. ^M00:20:00 So in BIBFRAME we say that something is related to something else, and then using the SPARQL query language we're able to say that a particular work has a relationship to something else. But then you can also say what else is related to that other thing, and you can bring things together in much more interesting ways. ^M00:20:21 ^M00:20:31 Okay, so this is an instance where it worked but you know probably the -- no, that didn't work. Let's go back to it. ^M00:20:42 ^M00:20:48 Sorry. Try again. So Concerto selections is probably not an actual work, it's just a colocation mechanism, but it did bring these things all together. I would say that all of these instances probably should be works in their own right and be somehow related to this overall collection work. So in a future pass we'll straighten that out. ^M00:21:14 ^M00:21:20 All right, so when we do this conversion we've noticed that the name titles that were described in ID have lots of extra information, like the media and the form, etcetera, that are disambiguated from previous works. And so what we're going to do now is we're going to take a work that has some of these extra subheadings, chop off that subheading and say, all right, can I find a name title match with everything except for this little part. And if I do make that match I'm going to make a relationship between those things. And all of these things that I've highlighted in red are candidates for such linking. Bibliographic records have the same thing: when they have a 130 with lots of subparts we should be able to chop off the subparts and say there is some relationship between everything except for that last node, and so we're trying to do that as well. And here's what I was talking about, 7XX related titles. For those we should be able to formulate the name and title, see if we can find a match in the database, and make a direct link between those things. So here's one where it actually did work, a vocal score. So this is a Rossini vocal score and the related work does not have vocal score on it.
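Here is a small sketch of the kind of SPARQL query Nate is describing, run with rdflib over a local dump: start from one work (Huckleberry Finn in his example), find what it is related to, and then what those related things are in turn related to. The file name, the title filter, and the use of bf:relatedTo as the generic relationship property are assumptions for illustration; the converted data may use more specific relationship properties.

```python
# Starting from a work whose title contains "huckleberry finn", find what it is
# related to, and then what those related things are related to in turn.
# "bibframe-works.ttl" is a placeholder for a local export of the converted data.
from rdflib import Graph

g = Graph().parse("bibframe-works.ttl", format="turtle")

QUERY = """
PREFIX bf: <http://id.loc.gov/ontologies/bibframe/>

SELECT DISTINCT ?related ?secondDegree WHERE {
  ?start bf:title/bf:mainTitle ?t .
  FILTER(CONTAINS(LCASE(STR(?t)), "huckleberry finn"))
  ?start bf:relatedTo ?related .
  OPTIONAL {
    ?related bf:relatedTo ?secondDegree .
    FILTER(?secondDegree != ?start)
  }
}
"""

for row in g.query(QUERY):
    print(row.related, row.secondDegree)
```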
^M00:22:44 ^M00:22:56 Okay, here's a translation, Ultimate Ironman. So this one is in Greek and it linked to the name title authority work, which has no language on it. But then it turns out that because we're asking questions of it there's also a Croatian translation. And I haven't yet done it, but the Croatian translation really should be a sibling of the first one that I started with, so that on that old work you should be able to ask what other languages this has been translated into without having to go up to the higher level. So there are lots of things down the right-hand side where we've just started exploiting the nature of RDF in creating these links. ^M00:23:41 ^M00:23:45 Okay, 130 $k Selections. So these poems are related to each other and apparently translated into Romanian as well. But here are some of the actual poems and not just the overall selection work. Okay, and the bibliographic record with 7XX related titles. I've only been doing this on an experimental basis, so the whole database has not been completely converted to having all these links. But this poem, I guess, or song is related to a lot of other things. ^M00:24:27 [ Laughter ] ^M00:24:34 Okay, so still to come. I started this presentation, I gave it in California in April, and I had a slide called still to come, and it had all these things on it. And I realized when I opened it up that we've already gotten started on most of these things, so I crossed them off here. We have exported all the RDF works and instances and stored them at ID for people who are interested, in the big libraries, to be able to download and ingest into their own databases and do something with. We are linking related titles, MARC 7XX. The name title authority records, on a daily basis those things are linking to each other. We haven't gone back and done the whole thing yet. And we're starting to experiment with linking instead of merging on things that should be RDA expression works. So we're still looking for more ways to link and we're thinking about ingesting different types of data, including CIP and ONIX records, although some of those are already in the database. So getting the data upstream a little bit may not be that useful but we'll see. Casalini, the Italian cataloging partner that we have, has a conversion of their own and they've converted all of our database and given it back to us. And they can give us weekly conversions of different records if we want it. So we're doing an experiment with how do we look at that data, convert it to whatever it needs to be for us, and ingest it. And that's all I have, thank you. ^M00:26:05 [ Applause ] ^F00:26:12 ^M00:26:26 >> Jodi Williamschen: So I'm Jodi Williamschen, I joined the library a year ago from Oakland, California, where I worked for Innovative Interfaces. I'm very happy for the Capitals, I'm way happier for the Golden State Warriors. ^M00:26:36 ^M00:26:46 And Les and I are going to talk about using the BIBFRAME editor. So I'll start out with just a quick overview of selected portions of the editor, starting with the URI or URL if you want to go explore it yourself. And we've set up a lot of editor profiles that are based on the materials cataloged in the library; as Nate said we haven't covered everything yet, but we're trying. One of the ones that we added during the pilot is the rare materials profile, and it has some special elements that the rare materials cataloger who's in the pilot requested.
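As a purely conceptual sketch of what an editor profile is doing (the actual profiles are JSON documents with their own structure, available on GitHub as Jodi notes later in the Q&A), the idea is a per-format list of which properties the cataloger sees, which are required, and which defaults are pre-filled. The property names and defaults below are invented for illustration, loosely echoing the examples given in this talk.

```python
# Invented, illustrative structure: each profile narrows the large BIBFRAME vocabulary
# to the fields relevant to one format, marks what is required, and supplies defaults.
PROFILES = {
    "monograph": {
        "instance_properties": ["title", "provisionActivity", "extent", "identifiedBy:ISBN"],
        "required": ["title"],
        "defaults": {"media": "unmediated", "carrier": "volume"},
    },
    "rare_materials": {
        "instance_properties": ["title", "collectiveTitle", "provisionActivity", "extent",
                                "genreForm:RBMS"],
        "required": ["title"],
        "defaults": {"media": "unmediated", "carrier": "volume"},
    },
    "dvd": {
        "instance_properties": ["title", "provisionActivity", "extent", "soundContent",
                                "identifiedBy:UPC"],
        "required": ["title"],
        "defaults": {"media": "video", "carrier": "videodisc"},
    },
}

def editable_fields(profile_name):
    """Return (property, is_required) pairs the editor should expose for a profile."""
    profile = PROFILES[profile_name]
    return [(p, p in profile["required"]) for p in profile["instance_properties"]]

print(editable_fields("rare_materials"))
```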
This is how the editor profile for a monograph work looks; there are just a lot of fields that can be filled. And a lot of them underneath are tied to controlled vocabularies, so there's a lot of type-aheads and lookups so that you don't have to remember to type everything in from scratch. This is an example of one of the type-aheads. So for form/genre it's tied to the Library of Congress genre/form thesaurus, so you can just start typing. And then you can select the term that you want, highlight it, and it goes into the editor. And behind the scenes the URI that's associated with that term in id.loc.gov is assigned to the field, and then you get a nice true triple on the backend with URIs. And also, I've been experimenting with different ways to put subject headings in, where you can search each component of a complex subject heading separately and get a URI for each component. And they are all stored together in the output. Where possible we've tried to put a lot of standard values in. So the language lookup is searching the MARC code list for languages. And we've added in other specialized terms that are part of the RDA registry. We've put them into ID just to keep all the terminology in-house, but I imagine some day we would be directly linking out to the RDA registry for these types of terms and probably more. In each profile we've also tried to customize standard values where possible. So a lot of the formats have a media type of unmediated and a carrier type of volume. But then we also have video and video disc for the DVD profile, audio and audio disc for all of the CD profiles, and projected and [inaudible] for the 35-millimeter profile. And we've also customized what fields are included in each profile. So these are the identifier options for four different profiles, based on the number of identifiers that are available for each format. So the monograph profile has the fewest and the music and sound recording profiles have many more. So the advantage of this is that if you're only cataloging monographs you don't have to sift through a lot of fields that don't apply to your format. But if you are cataloging in a specific format you have the fields that you need. ^M00:29:57 ^M00:30:00 And one thing that we just started a couple of weeks ago is putting in the Performed Music Ontology terms for medium of performance. As part of the LD4P (Linked Data for Production) project that Stanford was coordinating through a Mellon grant, there was a group of music catalogers that worked on a specialized vocabulary for music. And their terms for medium of performance are much more detailed than what was developed here for medium of performance. So we've been working with how to incorporate their vocabulary and their prefix in the ontology. And so this is the output, where you can see that bf is used as the prefix for the BIBFRAME terms but pmo for the Performed Music Ontology is used for their terms. Their ontology for music in this area is very, very detailed. I think we've kind of gotten it to work and I'm eager to hear what the music catalogers have for feedback after they've experimented with it a bit more. So when you're using BIBFRAME there are a lot of different ways that you can catalog the material that you have. So the first one would be that you have a brand-new work and a brand-new instance, and so you start with creating the work. This is how a lot of the cartographic catalogers are doing their map work.
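Behind a type-ahead like the genre/form lookup, something along these lines is happening: the keystrokes go to a suggest service at id.loc.gov, and the chosen label keeps its URI so the saved description gets a real triple rather than a bare string. The endpoint and response shape shown here are assumptions about the public suggest service, not necessarily what the editor itself calls.

```python
# Hedged sketch of a genre/form type-ahead backed by id.loc.gov. The suggest endpoint
# and its OpenSearch-style response ([query, [labels], [info], [uris]]) are assumptions
# and may differ from the service the BIBFRAME editor actually uses.
import requests

def suggest_genre_form(prefix):
    url = "https://id.loc.gov/authorities/genreForms/suggest/"
    resp = requests.get(url, params={"q": prefix}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # Pair each label with its URI so the editor can store the URI, not just the string.
    return list(zip(data[1], data[3]))

for label, uri in suggest_genre_form("detective"):
    print(f"{label}  ->  {uri}")
```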
So you can go to the editor workspace, select cartographic and choose to start with the work profile. And here's an example of all of the work level information being added in. During the pilot the map catalogers have been really outstanding to work with, because they really looked through the first profile that we gave them and suggested a lot of things that would make it better for them. And then here's their instance level information; it's a bit more detailed, there are more fields. And finally, there are two local blocks. Admin metadata is sort of all of the data only a cataloger could love. A lot of it comes from the leader and the 042 and 040 of the MARC record. And it's stored in a separate block. And then there's the item record block, which we have not really done a lot of work with yet; it's fairly generic and not really Voyager specific at all. And I think as we go forward we're going to have to make some decisions about how to treat the items in BIBFRAME, because if we want this to truly be a replacement for the Voyager ILS, are we going to have to put in Voyager-specific fields or should we be looking towards the next gen ILS and what it needs? And these are all decisions to come. And after you input all of this information you can preview it to see how it looks in different RDF serializations. I believe during the first pilot this was like the last proofreading check before you were done with the record. Now it's just a cute way to [inaudible] the data in a different way. And then you click on the post button, and back in the editor workspace you have this highlighted area where it says that the description has been submitted for posting into the database that Nate was just talking about. And you can click on that LCCN and ta-da, there it is in the BIBFRAME database. This we got working earlier this year and it's been really fun to watch the records go in and out. And then there's also sort of this moment of panic when you hit the post button and you're like please work, please work. Another workflow is to add a new instance to an existing work. This happens a lot with DVDs of television shows. So we had the Blu-ray DVD profile and you select the instance first. And then you can either search in the BIBFRAME database to find the work that you need or you can search within the BIBFRAME editor, but there you really only get the name of the work. If you want to be really, really sure, you want to search in the database first, because then you have access to the full suite of data to verify. And then after you link to the existing work you don't really need to add anything else to the work, so then you put in all of your instance level data. The DVD profile is one of the longest ones that we've created and we've tried to add as many standard dropdown lists as possible so that the cataloger doesn't have to keep typing in the same bits of data over and over. And now Les is going to talk about IBC records, because this is one of our big recent accomplishments.
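The "new instance on an existing work" workflow Jodi describes can be sketched in RDF terms: nothing is added to the work itself; the new description simply points at the work already in the database through bf:instanceOf, and all the detail goes on the instance. The URIs and the edition statement below are placeholders.

```python
# Sketch: describe a new Blu-ray instance of a television show whose work already
# exists in the database. Only instance-level data is recorded; the link to the
# existing work is a single bf:instanceOf triple. URIs are placeholders.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

existing_work = URIRef("http://example.org/works/12345#Work")    # found by searching the database
new_instance = URIRef("http://example.org/instances/67890#Instance")

g = Graph()
g.bind("bf", BF)
g.add((new_instance, RDF.type, BF.Instance))
g.add((new_instance, BF.instanceOf, existing_work))
g.add((new_instance, BF.editionStatement, Literal("Blu-ray edition")))

print(g.serialize(format="turtle"))
```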
The other thing about being able to work with IBCs is that catalogers in the BIBFRAME 2.0 pilot told us that this is very important to be able to do, because IBC work is very varied; there are a lot of different types of records that are coded IBC. There are a lot of different workflows associated with it and it's a huge part of LC's cataloging workflow. So I'm very excited about this; this is still in development, from what I understand. I've mentioned a few of the categories of IBC records that catalogers need to work with: vendor records, ECIP records, ISSN prepublication records, and I say and others, but that leaves out copy cataloging, a huge part of the workflow. Nate mentioned cloning IBC records to, like, build another edition for an existing record. So there's just a lot of work here; this is a large workflow for LC and I'm very happy that this is being developed. I have an example here, and I want to say that I just found this in the Voyager database and applied it to the BIBFRAME database. But I started out with a search in the BIBFRAME database, I pulled up the record. And I'm showing here that we have a link that we can paste into the upload feature of the BIBFRAME editor. So that's what I've done here, I've copied the link, the instance link here. ^M00:36:45 ^M00:36:49 A new feature was added this last week by our colleagues in NetDev, and that is the ability to choose a profile, choose the appropriate profile. Before this feature was added, all of these IBCs just loaded into the monographic profile. And now, this is an example of a rare book being loaded into the rare materials profile. If you have a serial IBC, a prepublication IBC that you're working with, it can be loaded into the serial profile. The thing about IBCs is that if they're in the database they often need additional work later. I forgot to say at the beginning that that's really one of the important things you need to be able to do with IBC records, update them later. And so that is possible now within the BIBFRAME editor. And so this just shows you pasting in that instance URI that you copied from the BIBFRAME database and submitting the URL and URI. And it brings up the BIBFRAME instance. In this case, this is from the rare materials profile, because I notice that it has collective title there; the other profiles don't have that. It also mentions here the RBMS terms that are part of that particular profile. So you can make updates to this record and send this description back into the BIBFRAME database. This is just showing some of the changes that you might want to make. I didn't make any changes to this description, but maybe you want to update the extent or add an extent or make other changes in the instance profile. So over here on the left I'm making a change to the instance profile by changing the extent, and over there on the right I'm looking at the work profile to add illustrative content to the description. And then it's posted back into the BIBFRAME database. And those are the basic steps for the IBC. ^M00:39:00 ^M00:39:04 Do you want to open this up for questions now for all of the speakers? >> Beacher Wiggins: So we're open for questions, comments. ^M00:39:13 ^M00:39:16 [ Inaudible ] ^M00:39:39 >> Nate Trail: Well, Stanford and Harvard [inaudible] for people [inaudible]. We're going to be downloading our stuff and figuring out what [inaudible]. They're the major players in this.
^M00:39:58 There's a bunch of Europeans that are very interested too, Scandinavian National Library is [inaudible], but they're not actually using our records [inaudible]. But everyone was very excited to [inaudible]. >> Have we got any feedback on anyone using the [inaudible]? >> Nate Trail: There's one guy who found some errors in it, so. >> That means they're looking. >> Nate Trail: You know that they're looking. >> Jodi Williamschen: And the editor profiles are available for download on GitHub and I know that Stanford has downloaded them and made some modifications so that the lookups will work, because everything in the editor profiles is very tied to the LC ecosystem, which means it doesn't really play outside the network. And so I'm pretty sure Stanford has tried to modify the profiles to work on their system. Regina. ^M00:40:58 [ Inaudible ] ^M00:41:32 There are a lot of things about the new RDA toolkit that worry me, that's just one of them. And I think it's something we're really going to have to figure out. There's a program at ALA on Monday morning about the toolkit that I'm going to go to, to get enlightened. I'm also worried about the loss of rule numbers, since we have a lot of hot linking in the editor that is tied to the URL for the rule. >> Beacher Wiggins: [Inaudible] thing is that we will have a full year from the time [inaudible] toolkit and its content are stable, which will be no sooner [inaudible] input in December of this year, so we have a year. And I've made it clear to our policy specialists that we want to determine how we want to move forward with it. And whatever linkages we can make with BIBFRAME [inaudible]. So there will be a lot, I won't say a lot, but there will be much behind-the-scenes work to solicit [inaudible] to make sure we roll this piece out as appropriately and as seamlessly [inaudible]. ^M00:42:53 >> Nate Trail: But the impact is not as large as you might think, because it's more of a profile issue. ^M00:43:01 The basic underpinnings of the editor will work the same; it will just allow the cataloger to produce more individual works and then it can be tied together. ^M00:43:12:20 So the configuration step, as we call it, [inaudible]. ^M00:43:18 But basically what Jodi and Les [inaudible]. ^M00:43:30 [ Inaudible ] ^M00:44:01 Well, I think it's as good as your MARC cataloging was, because the history of all the title changes has got to be recorded in the original MARC [inaudible]. ^M00:44:10 So when you look at Voyager records you can see previous, next, you know, merged into and all that. So it'll be just new ^M00:44:17 ways of expressing that information and us being able to write a query that says go get me that history and show it to me in the new presentation. ^M00:44:26 Just like the name title for [inaudible] browse that I showed, we could do something like that for titles. ^M00:44:34 >> And removing the unnecessary information from the display, so you really could configure: I just want title, the dates, the publisher [inaudible]. ^M00:44:46 >> Nate Trail: Yeah, but the thing that we're doing is handcrafted, so we could do a presentation like that easily. But when you're talking about a next generation catalog they're not going to do [inaudible] ^M00:44:57 unless that's the new, you know, the only new way of doing serials or something.
So that might need to be put into requirements and you'd be able to see that, because we're not building the ^M00:45:07:20 library's next catalog, we're building a demonstration that says here's how you could take advantage of RDF. ^M00:45:17 >> I think I already talked about that in one of the next gen meetings. ^M00:45:25 >> Jodi Williamschen: There. ^M00:45:27 >> I'm wondering, are there any plans to move BIBFRAME from the browser into a [inaudible] application? And along with that, thinking of advantages, you know, ^M00:45:36:15 the big one would be maybe not requiring a constant internet connection, but also, you know, building [inaudible] new key commands and kind of, like, navigating around more quickly that way. ^M00:45:46 ^M00:45:52 >> Nate Trail: Well, the basic premise of linked data is that you're connected to everything [inaudible]. And so internet connectivity is kind of built in. ^M00:46:03 But as far as, like, hotkeys for stuff, anyone can do that right now. You can do that even in your browser, to tell your browser this is what I'm going to do; it doesn't have to be a standalone application. ^M00:46:17 ^M00:46:22 >> Jodi Williamschen: Way in the back was next, then Jessalyn. ^M00:46:26 >> As a working cataloger I noticed that a snappy response time is one of the keys to getting records in the system. ^M00:46:36 MARC [inaudible] efficiency, very few characters to transmit your individual [inaudible] record. ^M00:46:46 This system is going to take more processing time. I have due concerns about [inaudible] and the ^M00:46:56 priority as we upgrade our systems in general that BIBFRAME the catalog has to be fast. And two, with the changes in net neutrality internet service is spotty and I'm in the [inaudible]. ^M00:47:10 Is there any way we can assure that our internet doesn't get dialed back so that we still work [inaudible] government employees? ^M00:47:20 >> Beacher Wiggins: That's a big question. >> Jodi Williamschen: Yes. ^M00:47:23 >> Beacher Wiggins: That's a question [inaudible] all of us frankly. ^M00:47:26 >> Jodi Williamschen: Yeah. ^M00:47:28 >> Beacher Wiggins: These things are brought up at various meetings but I don't think we can answer that question. ^M00:47:34 >> Nate Trail: And I think, if you're a cataloger in the current environment, slowness is kind of built in, because we're building ourselves all the pieces that are necessary to make something work. ^M00:47:46 We're not trying to make a fast machine right now. So we're in the stage of can it be done, not how do we make it seamless. ^M00:47:55 So your slowness will be solved with a vendor-supplied RDF editor, etcetera, system; otherwise they won't be able to sell it. ^M00:48:07 >> Beacher Wiggins: And some swap-outs that are going on now with the desktops and plug-ins will bring some relief. ^M00:48:16 And certainly our colleagues in OCIO [inaudible] are. ^M00:48:24 If not worried very [inaudible]. ^M00:48:30 >> Jodi Williamschen: So Jessalyn, I saw you. ^M00:48:34 [ Inaudible ] ^M00:49:36 I can answer the second question. It would be great to have an OCLC gateway that dropped data right into the BIBFRAME database, but there are a lot of technological hurdles between that happening now and in the future. The main one is that the BIBFRAME editor and the database are on test servers internally and are not open to the outside. ^M00:49:59 And to link up to OCLC there'd have to be a lot of port traffic opened up. And I don't think we're ready to go there yet.
But we understand it's very important, because downloading records from any utility is vital for copy cataloging. I think if we move more into a production environment that would be one of the top things to get resolved. And I'm hoping Les can answer the first one. >> Les Hawkins: Well, no, but I mean this is, in one way, this is how we're able to do some copy cataloging. We still have to download the records from OCLC. If you work with serials you have to do your updating and your work in OCLC, then you're bringing that record into Voyager if it's held by LC or if it's an ISSN record, a prepublication record. Those ISSN prepublication records are IBC records, they're coded as such in the Voyager database. So yes, I have an example of a prepublication record that can be loaded with that process that I just showed you. So it could be updated in the BIBFRAME editor by loading that IBC record. As for other types of copy that could be loaded through the load IBC feature, I believe if there are IBCs in the Voyager database that you've downloaded from OCLC they can be uploaded and worked on in the BIBFRAME database. Yes. ^M00:51:21 [ Inaudible ] ^M00:51:51 If you have coded a record IBC in the Voyager database, yes, you can use that mechanism that we just showed you. >> Jodi Williamschen: Actually you can load in any record that you want. >> Les Hawkins: I was going to say I thought that happened. So when, you know, we're talking about serials, there are lots of post-publication serials that we work with; we work with them in OCLC and we download them into the Voyager database. But I hadn't experimented with that. But I'm pretty sure with the upload IBC you can upload any of those records that you've downloaded from OCLC and work with them in the BIBFRAME editor. ^M00:52:22 [ Inaudible ] ^M00:52:45 >> Nate Trail: Now I think you go into the different database: if the record was in OCLC and got into the ILS and then the next day it was converted into a different. You can call it up in BIBFRAME as though it was an IBC. Use whatever profile you now think is appropriate and it will populate in the editor; change what you need to and save it back as though it was an IBC. You don't have to change the 906 or whatever code in order to make it work, you just call [inaudible]. >> Les Hawkins: And so, on a daily basis, all those records that you've downloaded, whether you've worked on them or not, are being converted to BIBFRAME; they're available in the BIBFRAME database. ^M00:53:29 [ Inaudible ] ^M00:53:40 This is a new feature, this is why we're excited about this. >> Jodi Williamschen: If you can find the record in the BIBFRAME database you can recall it in the BIBFRAME editor if it's an instance. ^M00:53:51 [ Inaudible ] ^M00:54:39 >> Les Hawkins: Got one question in the back. ^M00:54:40 [ Inaudible ] ^M00:54:53 >> Jodi Williamschen: Yeah, I think a lot of the issues with connecting up to OCLC are on the technology side, getting the machines to talk to each other. ^M00:55:01 ^M00:55:05 [ Inaudible ] ^M00:55:57 I know what I want to do, we just have to figure out if it's going to work. Because part of the thing is that it's two bytes in the 008 field, and it's kind of connected to one BIBFRAME property. And are we losing anything else that's expressed in that regularity byte by just having it say irregular? But we'll get there, because it's an RDA registry [inaudible], so we should be following that model as well. Jessalyn.
^M00:56:31 [ Inaudible ] ^M00:56:50 Since the editor is browser based it's pretty accepting of Unicode. The challenge has been character set mapping, whether you have a Chinese keyboard or not; our Hebrew cataloger in the pilot has noted some oddities in searching depending on the diacritic. So there are a few things that need to be ironed out, but it's more on the data input end, I think. As for the storage and everything getting put back into the database, everything looks fantastic. ^M00:57:28 ^M00:57:31 [ Inaudible ] ^M00:57:33 Yes, since you are the Hebrew cataloger, Rojay [assumed spelling]. ^M00:57:37 [ Inaudible ] ^M00:58:04 That's great. >> Les Hawkins: Well, it's 11 o'clock right now, how do you all wrap this up? >> Jodi Williamschen: We need to go to 11:30 if people still have questions. >> Les Hawkins: It's 11:30, oh, I'm so sorry. >> Beacher Wiggins: Are you trying to. >> Les Hawkins: And this is being filmed, isn't it? >> Jodi Williamschen: Everybody wants to go, you know, Les wants to go to the parade. ^M00:58:22 [ Inaudible ] ^M00:58:25 >> Les Hawkins: I think there was at least another question, sorry, was there? >> Beacher Wiggins: Other, did we see another hand? If not, then for those who are interested in [inaudible] to the Capitals parade you'll still have time to do that. Thank you all for showing up. >> Jodi Williamschen: Thanks. ^M00:58:43 [ Applause ] ^M00:58:46 >> This has been a presentation of the Library of Congress. Visit us at loc.gov. ^E00:58:52