4. Catalog

4.1. Graph Catalog

This section details the graph catalog operations available to manage named graph projections within the Neo4j Graph Data Science library.

Graph algorithms run on a graph data model which is a projection of the Neo4j property graph data model. A graph projection can be seen as a view over the stored graph, containing only analytically relevant, potentially aggregated, topological and property information. Graph projections are stored entirely in-memory using compressed data structures optimized for topology and property lookup operations.

The graph catalog is a concept within the GDS library that allows managing multiple graph projections by name. Using its name, a created graph can be used many times in the analytical workflow. Named graphs can be created using either a Native projection or a Cypher projection. After usage, named graphs can be removed from the catalog to free up main memory.

Graphs can also be created when running an algorithm without placing them in the catalog. We refer to such graphs as anonymous graphs.

The graph catalog exists as long as the Neo4j instance is running. When Neo4j is restarted, graphs stored in the catalog are lost and need to be re-created.

This chapter explains the available graph catalog operations.

Creating, using, listing, and dropping named graphs are management operations bound to a Neo4j user. Graphs created by one Neo4j user are never accessible to a different user.

4.1.1. Creating graphs in the catalog

A projected graph can be stored in the catalog under a user-defined name. Using that name, the graph can be referred to by any algorithm in the library. This allows multiple algorithms to use the same graph without having to re-create it on each algorithm run.

There are two variants of projecting a graph from the Neo4j database into main memory:

  • Native projection

    • Provides the best performance by reading from the Neo4j store files. Recommended to be used during both the development and the production phase.
  • Cypher projection

    • The more flexible, expressive approach with less focus on performance. Recommended primarily for the development phase.

There is also a way to generate a random graph; see the Graph Generation documentation for more details.

In this section, we give brief examples of how to create a graph using either variant. For detailed information about the configuration of each variant, we refer to the dedicated sections.

In the following two examples we show how to create a named graph containing nodes and relationships; the graph and element names used in the sketches below are illustrative placeholders.

Create a graph using a native projection: 
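
A minimal sketch, assuming the GDS 1.x procedure syntax; the graph name 'my-graph' and the Person/KNOWS projections are illustrative placeholders, not taken from the original example.

    CALL gds.graph.create(
      'my-graph',   // name under which the graph is stored in the catalog
      'Person',     // native node projection: all nodes with this label
      'KNOWS'       // native relationship projection: all relationships of this type
    )
    YIELD graphName, nodeCount, relationshipCount;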

We can also use Cypher to select the nodes and relationships to be projected into the in-memory graph.

Create a graph using a Cypher projection: 
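
A sketch under the same assumptions; the first query selects the node ids to project, and the second returns source/target id pairs for the relationships:

    CALL gds.graph.create.cypher(
      'my-cypher-graph',
      'MATCH (n:Person) RETURN id(n) AS id',
      'MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN id(a) AS source, id(b) AS target'
    )
    YIELD graphName, nodeCount, relationshipCount;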

After creating the graphs in the catalog, we can refer to them in algorithms by using their name.

Run Page Rank on one of our created graphs: 
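
For example, streaming PageRank scores from the named graph; the name property used for display is an assumption about the underlying data:

    CALL gds.pageRank.stream('my-graph')
    YIELD nodeId, score
    RETURN gds.util.asNode(nodeId).name AS name, score
    ORDER BY score DESC;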

4.1.2. Listing graphs in the catalog

Once we have created graphs in the catalog, we can list information about either all of them or a single graph using its name.

List information about all graphs in the catalog: 
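
A sketch, assuming GDS 1.x; further columns, such as those discussed below, can be added to the YIELD clause:

    CALL gds.graph.list()
    YIELD graphName, nodeCount, relationshipCount;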

List information about a named graph in the catalog: 
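
Passing a graph name restricts the listing to that graph:

    CALL gds.graph.list('my-graph')
    YIELD graphName, nodeCount, relationshipCount, schema;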

The nodeProjection and relationshipProjection columns are primarily applicable to Native projection. The nodeQuery and relationshipQuery columns are applicable only to Cypher projection and are null for graphs created with Native projection.

The degreeDistribution column is more time-consuming to compute than the other return columns. It is, however, only computed when included in the YIELD subclause.

The schema consists of information about the nodes and relationships stored in the graph. For each node label, the schema maps the label to its property keys and their corresponding property types. Similarly, the schema maps the relationship types to their property keys and property types. The property type is either Integer or Float.

The creationTime indicates when the graph was created in memory. The modificationTime indicates when the graph was last updated by an algorithm running in mutate mode. The sizeInBytes yields the number of bytes used on the Java heap to store that graph. The memoryUsage is the same information in a human-readable format.

List information about the degree distribution of a specific graph: 
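
Because the degree distribution is only computed when explicitly requested (as noted above), the sketch yields it directly:

    CALL gds.graph.list('my-graph')
    YIELD graphName, degreeDistribution;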

4.1.3. Check if a graph exists in the catalog

We can check if a graph is stored in the catalog by looking up its name.

Check if a graph exists in the catalog: 
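
Assuming the same GDS 1.x catalog procedures:

    CALL gds.graph.exists('my-graph')
    YIELD graphName, exists;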

4.1.4. Removing node properties from a named graph

We can remove node properties from a named graph in the catalog. This is useful to free up main memory or to remove accidentally created node properties.

Remove multiple node properties from a named graph: 
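
A sketch; pageRank and communityId stand in for properties previously added by algorithms running in mutate mode:

    CALL gds.graph.removeNodeProperties('my-graph', ['pageRank', 'communityId'])
    YIELD propertiesRemoved;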

The above example requires all given properties to be present on at least one node projection, and the properties will be removed from all such projections.

The procedure can be configured to remove just the properties for some specific node projections. In the following example, we run an algorithm on a sub-graph and subsequently remove the newly created property.

Remove node properties of a specific node projection: 
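
A sketch, assuming the projections are passed as a third positional argument as in the stream variants; Person is an illustrative node projection:

    CALL gds.graph.removeNodeProperties(
      'my-graph',
      ['communityId'],
      ['Person']   // restrict the operation to this node projection
    )
    YIELD propertiesRemoved;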

When a list of projections that are not '*' is specified, as in the example above, a different validation and execution is applied: it is then required that all of the given projections have all of the given properties, and the properties will be removed from all of those projections.

If any of the given projections is '*', the procedure behaves as in the first example.

4.1.5. Deleting relationship types from a named graph

We can delete all relationships of a given type from a named graph in the catalog. This is useful to free up main memory or to remove accidentally created relationship types.

Delete all relationships of type T from a named graph: 
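
A sketch, using the type T from the caption:

    CALL gds.graph.deleteRelationships('my-graph', 'T')
    YIELD deletedRelationships;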

4.1.6. Removing graphs from the catalog

Once we have finished using the named graph we can remove it from the catalog to free up memory.

Remove a graph from the catalog: 
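
A minimal sketch, assuming GDS 1.x:

    CALL gds.graph.drop('my-graph')
    YIELD graphName;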

4.1.7. Stream node properties

We can stream node properties stored in a named in-memory graph back to the user. This is useful if we ran multiple algorithms in mutate mode and want to retrieve some or all of the results. This is similar to what the stream execution mode does, but allows more fine-grained control over the operations.

Stream multiple node properties: 
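
A sketch; the property names are placeholders for values produced in mutate mode:

    CALL gds.graph.streamNodeProperties('my-graph', ['pageRank', 'communityId'])
    YIELD nodeId, nodeProperty, propertyValue;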

The above example requires all given properties to be present on at least one node projection, and the properties will be streamed for all such projections.

The procedure can be configured to stream just the properties for some specific node projections. In the following example, we ran an algorithm on a sub-graph and subsequently streamed the newly created property.

Stream node properties of a specific node projection: 
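
A sketch, restricting the stream to an assumed Person projection via the third argument:

    CALL gds.graph.streamNodeProperties('my-graph', ['communityId'], ['Person'])
    YIELD nodeId, nodeProperty, propertyValue;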

When a list of projections that are not '*' is specified, as in the example above, a different validation and execution is applied: it is then required that all of the given projections have all of the given properties, and the properties will be streamed for all of those projections.

If any of the given projections is '*', the procedure behaves as in the first example.

When streaming multiple node properties, the name of each property is included in the result. This adds some overhead, as each property name must be repeated for each node in the result, but it is necessary in order to distinguish properties. For streaming a single node property this is not necessary. The gds.graph.streamNodeProperty procedure streams a single node property from the in-memory graph and omits the property name. The result has the format nodeId, propertyValue, as is familiar from the streaming mode of many algorithm procedures.

Stream a single node property: 
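
A sketch; note that no property name column is returned:

    CALL gds.graph.streamNodeProperty('my-graph', 'pageRank')
    YIELD nodeId, propertyValue;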

4.1.8. Stream relationship properties

We can stream relationship properties stored in a named in-memory graph back to the user. This is useful if we ran multiple algorithms in mutate mode and want to retrieve some or all of the results. This is similar to what the stream execution mode does, but allows more fine-grained control over the operations.

Stream multiple relationship properties: 
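
A sketch; score and weight are illustrative property names:

    CALL gds.graph.streamRelationshipProperties('my-graph', ['score', 'weight'])
    YIELD sourceNodeId, targetNodeId, relationshipType, relationshipProperty, propertyValue;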

The procedure can be configured to stream just the properties for some specific relationship projections. In the following example, we ran an algorithm on a sub-graph and subsequently streamed the newly created property.

Stream relationship properties of a specific relationship projection: 
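
A sketch, restricting the stream to an assumed SIMILAR relationship projection:

    CALL gds.graph.streamRelationshipProperties('my-graph', ['score'], ['SIMILAR'])
    YIELD sourceNodeId, targetNodeId, relationshipType, relationshipProperty, propertyValue;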

When a list of projections that are not '*' is specified, as in the example above, a different validation and execution is applied: it is then required that all of the given projections have all of the given properties, and the properties will be streamed for all of those projections.

If any of the given projections is '*', the procedure behaves as in the first example.

When streaming multiple relationship properties, the name of the relationship type and of each property is included in the result. This adds some overhead, as each type name and property name must be repeated for each relationship in the result, but it is necessary in order to distinguish properties. For streaming a single relationship property, the property name can be left out. The gds.graph.streamRelationshipProperty procedure streams a single relationship property from the in-memory graph and omits the property name. The result has the format sourceNodeId, targetNodeId, relationshipType, propertyValue.

Stream a single relationship property: 
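
A sketch; the property name column is omitted from the result:

    CALL gds.graph.streamRelationshipProperty('my-graph', 'score')
    YIELD sourceNodeId, targetNodeId, relationshipType, propertyValue;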

4.1.9. Write node properties to Neo4j

Similar to streaming properties stored in an in-memory graph, it is also possible to write them back to Neo4j. This is similar to what the write execution mode does, but allows more fine-grained control over the operations.

The properties to write are typically values that were produced by running algorithms in mutate mode. Properties that were added to the graph at creation time will often already be present in the Neo4j database.

Write multiple node properties to Neo4j: 
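
A sketch with illustrative property names:

    CALL gds.graph.writeNodeProperties('my-graph', ['pageRank', 'communityId'])
    YIELD propertiesWritten;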

The above example requires all given properties to be present on at least one node projection, and the properties will be written for all such projections.

The procedure can be configured to write just the properties for some specific node projections. In the following example, we ran an algorithm on a sub-graph and subsequently wrote the newly created property to Neo4j.

Write node properties of a specific node projection to Neo4j: 
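
A sketch, again assuming the projections are passed as a third positional argument:

    CALL gds.graph.writeNodeProperties('my-graph', ['communityId'], ['Person'])
    YIELD propertiesWritten;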

When a list of projections that are not '*' is specified, as in the example above, a different validation and execution is applied: it is then required that all of the given projections have all of the given properties, and the properties will be written to Neo4j for all of those projections.

If any of the given projections is '*', the procedure behaves as in the first example.

4.1.10. Write relationships to Neo4j

We can write relationships stored in a named in-memory graph back to Neo4j. This can be used to write algorithm results (for example from Node Similarity) or relationships that have been aggregated during graph creation.

The relationships to write are specified by a relationship type. This can either be an element identifier used in a relationship projection during graph construction, or the relationship type under which an algorithm that creates relationships stored its results.

Write relationships to Neo4j: 
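
A sketch; SIMILAR stands in for a relationship type stored in the in-memory graph:

    CALL gds.graph.writeRelationship('my-graph', 'SIMILAR')
    YIELD relationshipsWritten;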

By default, no relationship properties will be written. To write relationship properties, these have to be explicitly specified.

Write relationships and their properties to Neo4j: 
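
A sketch, naming one property to write alongside the relationships:

    CALL gds.graph.writeRelationship('my-graph', 'SIMILAR', 'score')
    YIELD relationshipsWritten, propertiesWritten;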

4.1.11. Create Neo4j databases from named graphs

We can create new Neo4j databases from named in-memory graphs stored in the graph catalog. All nodes, relationships and properties present in an in-memory graph are written to a new Neo4j database. This includes data that has been projected in and data that has been added by running algorithms in mutate mode. The newly created database will be stored in the Neo4j databases directory using the given database name.

This feature is useful in scenarios such as the following:

  • Avoid heavy write load on the operational system by exporting the data instead of writing back.
  • Create an analytical view of the operational system that can be used as a basis for running algorithms.
  • Produce snapshots of analytical results and persist them for archiving and inspection.
  • Share analytical results within the organization.

Export a named graph to a new database in the Neo4j databases directory: 
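
A sketch, assuming GDS 1.x; mygraphdb is an illustrative database name:

    CALL gds.graph.export('my-graph', { dbName: 'mygraphdb' })
    YIELD dbName, nodeCount, relationshipCount;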

The procedure yields information about the number of nodes, relationships and properties written.

Name                     Type     Default  Optional  Description
dbName                   String   -        no        Name of the exported Neo4j database.
writeConcurrency         Integer  -        yes       The number of concurrent threads used for writing the database.
enableDebugLog           Boolean  -        yes       Prints debug information to the Neo4j log files.
batchSize                Integer  -        yes       Number of entities processed by one single thread at a time.
defaultRelationshipType  String   -        yes       Relationship type used for relationship projections.

The database must not exist when running the export procedure; it is created afterwards using the Neo4j database management commands.

After running the procedure, we can create and start the new database and query the exported graph:
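
A sketch assuming Neo4j 4.x multi-database support; :use is a Neo4j Browser / Cypher Shell command, and mygraphdb is the dbName used at export time:

    :use system
    CREATE DATABASE mygraphdb;
    :use mygraphdb
    MATCH (n) RETURN count(n) AS nodeCount;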


What is a Catalog?

Chances are, you set up a catalog when you first installed Luminar. The catalog contains all of the information about your files including metadata like ratings and labels as well as any edits you make with the tools.

Remember, the edits you make in Luminar are always non-destructive when you work with a library. This means you are not changing the actual files but rather capturing the instructions into a database. When you are ready to share or use the file elsewhere you’ll export the image and apply the edits.

By default, Luminar’s catalog is installed in your Pictures folder. This isn’t an issue as long as you don’t include the catalog folder when adding images; doing so would add all of the catalog’s preview images to the Gallery/Library.

The contents of a catalog are stored inside a folder. If backing up or moving the catalog, be sure to take the entire folder.


4. Catalog Administration

Set these controls before you start cataloging on your Koha system.

  • Get there: More > Administration > Catalog

4.1. MARC Bibliographic Frameworks

Think of Frameworks as templates for creating new bibliographic records. Koha comes with some predefined frameworks that can be edited or deleted, and librarians can create their own frameworks for content specific to their libraries.

  • Get there: More > Administration > Catalog > MARC Bibliographic Frameworks

Important

Do not delete or edit the Default Framework since this will cause problems with your cataloging records - always create a new template based on the Default Framework, or alter the other Frameworks.

4.1.1. Add New Framework

To add a new framework

  • Click 'New Framework'

    • Enter a code of 4 or fewer characters

    • Use the Description field to enter a more detailed definition of your framework

  • Click 'Submit'

  • Once your Framework is added click 'MARC structure' to the right of it on the list of Frameworks

    • You will be asked to choose a Framework to base your new Framework on; this makes it easier than starting from scratch

  • Once your Framework appears on the screen you can edit or delete each field by following the instructions for editing subfields

4.1.2. Edit Existing Frameworks

Clicking 'Edit' to the right of a Framework will only allow you to edit the Description for the Framework.

To make edits to the fields associated with the Framework you must first click 'MARC Structure' and then follow the instructions for editing subfields

4.1.3. Add subfields to Frameworks

To add a field to a Framework click the 'New Tag' button at the top of the Framework definition

This will open up a blank form for entering MARC field data

Enter the information about your new tag:

  • The 'Tag' is the MARC field number

  • The 'Label for lib' is the text that will appear in the staff client when in the cataloging module

  • The 'Label for OPAC' is the text that will appear in the OPAC when viewing the MARC version of the record

  • If this field can be repeated, check the 'Repeatable' box

  • If this field is mandatory, check the 'Mandatory' box

  • If you want this field to be a pull down with limited possible answers, choose which 'Authorized value' list you want to use

When you're finished, click 'Save Changes' and you will be presented with your new field

To the right of the new field is a link to 'Subfields'. You will need to add subfields before this tag will appear in your MARC editor. The process of entering the settings for the new subfield is the same as those found in the editing subfields in frameworks section of this manual.

4.1.4. Edit Framework Subfields

Frameworks are made up of MARC fields and subfields. To make edits to most Frameworks you must edit the fields and subfields. Clicking 'Edit' to the right of each subfield will allow you to make changes to the text associated with the field

  • Each field has a tag (which is the MARC tag)

    • The 'Label for lib' is what will show in the staff client if you have advancedMARCeditor set to display labels

    • The 'Label for OPAC' is what will show on the MARC view in the OPAC

    • If you check 'Repeatable' then the field will have a plus sign next to it allowing you to add multiples of that tag

    • If you check 'Mandatory' the record will not be allowed to save unless you have a value assigned to this tag

    • 'Authorized value' is where you define an authorized value that your catalogers can choose from a pull down to fill this field in

To edit the subfields associated with the tag, click 'Subfields' to the right of the tag on the 'MARC Structure' listing

  • From the list of subfields you can click 'Delete' to the right of each to delete the subfields

  • To edit the subfields click 'Edit Subfields'

  • For each subfield you can set the following values

    • Text for librarian

      • what appears before the subfield in the librarian interface

    • Text for OPAC

      • what appears before the field in the OPAC.

        • If left empty, the text for librarian is used instead

    • Repeatable

      • the field will have a plus sign next to it allowing you to add multiples of that tag

    • Mandatory

      • the record will not be allowed to save unless you have a value assigned to this tag

    • Managed in tab

      • defines the tab where the subfield is shown. All subfields of a given field must be in the same tab or ignored. Ignore means that the subfield is not managed.

    • Default value

      • defines what you want to appear in the field by default; this is editable, but it saves time if you often use the same note or the same value in a field

    • hidden

      • allows you to select from 19 possible visibility conditions, 17 of which are implemented. They are the following:

        • -9 => Future use

        • -8 => Flag

        • -7 => OPAC !Intranet !Editor Collapsed

        • -6 => OPAC Intranet !Editor !Collapsed

        • -5 => OPAC Intranet !Editor Collapsed

        • -4 => OPAC !Intranet !Editor !Collapsed

        • -3 => OPAC !Intranet Editor Collapsed

        • -2 => OPAC !Intranet Editor !Collapsed

        • -1 => OPAC Intranet Editor Collapsed

        • 0 => OPAC Intranet Editor !Collapsed

        • 1 => !OPAC Intranet Editor Collapsed

        • 2 => !OPAC !Intranet Editor !Collapsed

        • 3 => !OPAC !Intranet Editor Collapsed

        • 4 => !OPAC Intranet Editor !Collapsed

        • 5 => !OPAC !Intranet !Editor Collapsed

        • 6 => !OPAC Intranet !Editor !Collapsed

        • 7 => !OPAC Intranet !Editor Collapsed

        • 8 => !OPAC !Intranet !Editor !Collapsed

        • 9 => Future use

      • ( ! means 'not visible' or in the case of Collapsed 'not Collapsed')

    • Is a URL

      • if checked, it means that the subfield is a URL and can be clicked

    • Link

      • If you enter a field/subfield here (200b), a link appears after the subfield in the MARC Detail view. This view is present only in the staff client, not the OPAC. If the librarian clicks on the link, a search is done on the database for the field/subfield with the same value. This can be used in two main ways:

        • on a field like the author (200f in UNIMARC), putting 200f here lets you see all bibliographic records with the same author.

        • on a field that is a link (4xx) to another bibliographic record. For example, putting 011a in 464$x will find the serials with that ISSN.

      • Warning

        This value should not change after data has been added to your catalog

    • Koha link

      • Koha is multi-MARC compliant, so it does not inherently know what 245$a or 200$f means (those two fields both hold the title, in MARC21 and UNIMARC respectively). In this list you can "map" a MARC subfield to its meaning, and Koha constantly maintains consistency between a subfield and its meaning. When the user wants to search on "title", this link is used to find what is searched (245 if you're MARC21, 200 if you're UNIMARC).

    • Authorized value

      • means the value cannot be typed by the librarian, but must be chosen from a pull down generated by the authorized value list

      • For example, the 504a field can be set to show the MARC504 Authorized Values when cataloging

    • Thesaurus

      • means that the value is not free text, but must be searched in the authority/thesaurus of the selected category

    • Plugin

      • means the value is calculated or managed by a plugin. Plugins can do almost anything.

      • For example, in UNIMARC there are plugins for every 1xx field that is a coded field. The plugin is a huge help for the cataloger! There are also two plugins (unimarc_plugin_210c and unimarc_plugin_225a) that can "magically" find the publisher from an ISBN, and the collection list for the publisher

  • To save your changes simply click the 'Save Changes' button at the top of the screen

4.2. Koha to MARC Mapping

While Koha stores the entire MARC record, it also stores common fields for easy access in various tables in the database. Koha to MARC Mapping is used to tell Koha where to find these values in the MARC record. In many cases you will not have to change the default values set by this tool on installation, but it is important to know that the tool is here and can be used at any time.

  • Get there: More > Administration > Catalog > Koha to MARC Mapping

The Koha to MARC Mapping page offers you the option of choosing from one of three tables in the database to assign values to.

After choosing the table you would like to view, click 'OK.' To edit any mapping click on the 'Koha Field' or the 'Edit' link.

Choose which MARC field you would like to map to this Koha Field and click the 'OK' button. If you would like to clear all mappings, click the 'Click to "Unmap"' button.

Important

At this time you can map only 1 MARC field to 1 Koha field. This means that you won't be able to map both the 100a and the 700a to the author field; you need to choose one or the other.

4.3. Keywords to MARC Mapping

This tool will allow you to map MARC fields to a set of predefined keywords.

  • Get there: More > Administration > Catalog > Keywords to MARC Mapping

At this time the only keyword in use is 'subtitle.'

Using this tool you can define what MARC field prints to the detail screen of the bibliographic record using keywords. The following example will use the subtitle field.

Using the Framework pull down menu, choose the Framework you would like to apply this rule to. For example, the subtitle for books can be found in the 245$b field.

However the subtitle for DVDs appears in 245$p

Using this tool you can tell Koha to print the right field as the subtitle when viewing the bibliographic record in the OPAC.

This tool can be used to chain together pieces of the record as well. If you want the series number to show in the title on your search results you simply have to map 490 $v to 'subtitle' along with the 245 $b.

Tip

Chain together the fields you want to show after the item title in the order in which you want them to appear.

Future developments will include additional keyword assigned fields.

4.4. MARC Bibliographic Framework Test

Checks the MARC structure.

  • Get there: More > Administration > Catalog > MARC Bibliographic Framework Test

If you change your MARC Bibliographic framework it's recommended that you run this tool to test for errors in your definition.

4.5. Authority Types

Authority Types are basically MARC Frameworks for Authority records and because of that they follow the same editing rules found in the MARC Bibliographic Frameworks section of this manual. Koha comes with many of the necessary Authority frameworks already installed. To learn how to add and edit Authority Types, simply review the MARC Bibliographic Frameworks section of this manual.

  • Get there: More > Administration > Catalog > Authority Types

4.6. Classification Sources

Source of classification or shelving scheme is an Authorized Values category that is mapped to field 942$2 in Koha's MARC Bibliographic frameworks.

  • Get there: More > Administration > Catalog > Classification sources

Commonly used values of this field are:

  • ddc - Dewey Decimal Classification

  • lcc - Library of Congress Classification

If you chose to install classification sources during Koha's installation, you would see other values too:

  • ANSCR (sound recordings)

  • SuDOC classification

  • Universal Decimal Classification

  • Other/Generic Classification

4.6.1. Adding/Editing Classification Sources

You can add your own source of classification by using the New Classification Source button. To edit use the Edit link.

When creating or editing:

  • You will need to enter a code and a description.

  • Check the 'Source in use?' checkbox if you want the value to appear in the drop down list for this category.

  • Select the appropriate filing rule from the drop down list.

4.6.2. Classification Filing Rules

Filing rules determine the order in which items are placed on shelves.

Values that are pre-configured in Koha are:

  • Dewey

  • Generic

  • LCC

Filing rules are mapped to Classification sources. You can set up new filing rules by using the New Filing Rule button. To edit use the Edit link.

When creating or editing:

  • Enter a code and a description

  • Choose an appropriate filing routine - dewey, generic or lcc

4.7. Record Matching Rules

Record matching rules are used when importing MARC records into Koha.

  • Get there: More > Administration > Catalog > Record Matching Rules

The rules that you set up here will be referenced when you Stage MARC Records for Import.

To create a new matching rule:

  • Click 'New Record Matching Rule'

    • Choose a unique name and enter it in the 'Matching rule code' field

    • 'Description' can be anything you want to make it clear to you what rule you're picking

    • 'Match threshold' - The total number of 'points' a biblio must earn to be considered a 'match'

    • Match points are set up to determine what fields to match on

    • 'Search index' can be found by looking at the ccl.properties file on your system, which tells the Zebra indexer what data to search for in the MARC data.

    • 'Score' - The number of 'points' a match on this field is worth. If the sum of each score is greater than the match threshold, the incoming record is a match to the existing record

    • Enter the MARC tag you want to match on in the 'Tag' field

    • Enter the MARC tag subfield you want to match on in the 'Subfields' field

    • 'Offset' - For use with control fields, 001-009

    • 'Length' - For use with control fields, 001-009

    • Koha only has one 'Normalization rule' that removes extra characters such as commas and semicolons. The value you enter in this field is irrelevant to the normalization process.

    • 'Required match checks' - ??

4.7.1. Sample Record Matching Rule: Control Number

  • Match threshold: 100

  • Matchpoints (just the one):

  • Search index: Control-number

  • Score: 101

  • Tag: 001

    • Note

      this field is for the control number assigned by the organization creating, using, or distributing the record

  • Subfields: a

  • Offset: 0

  • Length: 0

  • Normalization rule: Control-number

  • Required Match checks: none (remove the blank one)

