Also added an ANALYZE statement to the end of the datafile generation
process. This generates statistics that allow sqlite to sometimes come
up with a better query plan, which in many cases can significantly speed
up queries. This constitutes a minor database update, but because this
is the first version that uses the database versioning system, older
versions are essentially incompatible.
refs #2
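
As a sketch of the idea (not the generator's actual code; the datafile
path is a placeholder), running ANALYZE as the last step with the
sqlite3 C API looks roughly like this:

    #include <sqlite3.h>
    #include <iostream>

    int main()
    {
      sqlite3* db = nullptr;

      // "output.sqlite3" is a placeholder for the generated datafile path.
      if (sqlite3_open("output.sqlite3", &db) != SQLITE_OK)
      {
        std::cerr << sqlite3_errmsg(db) << std::endl;
        return 1;
      }

      // ... datafile generation happens here ...

      // ANALYZE gathers table and index statistics that the query planner
      // can use to pick better plans for later queries.
      char* error = nullptr;
      if (sqlite3_exec(db, "ANALYZE;", nullptr, nullptr, &error) != SQLITE_OK)
      {
        std::cerr << error << std::endl;
        sqlite3_free(error);
      }

      sqlite3_close(db);
      return 0;
    }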
|
This commit contains a database update.
|
`rhymes_with` now also contains `prerhyme` so that rhyming joins can be
covering (satisfied entirely from the index). `notions_lemmas` and
`lemmas_notions` have been created to facilitate "jumping" over `words`
when it is only needed as a many-to-many through table. Because
`notion_words` and `lemma_words` are prefixes of these new indexes, they
have been removed.
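
Purely as an illustration of the shape of these indexes (the column
names and orders below are guesses, not the actual datafile schema),
they might be created like so:

    #include <sqlite3.h>

    // Hypothetical column lists; the real datafile may differ.
    void createIndexes(sqlite3* db)
    {
      // Including prerhyme (and the id used to join back) lets rhyming
      // joins be answered entirely from the index.
      sqlite3_exec(db,
        "CREATE INDEX rhymes_with ON pronunciations(rhyme, prerhyme, word_id);",
        nullptr, nullptr, nullptr);

      // Both directions over the words through table, so queries can jump
      // from notions to lemmas (and back) without touching words itself.
      sqlite3_exec(db,
        "CREATE INDEX notions_lemmas ON words(notion_id, lemma_id);"
        "CREATE INDEX lemmas_notions ON words(lemma_id, notion_id);",
        nullptr, nullptr, nullptr);

      // notion_words ~ words(notion_id) and lemma_words ~ words(lemma_id)
      // would be prefixes of the two indexes above, making them redundant.
    }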
|
These modifications can make some queries run significantly faster.
|
This involved adding a new type of filter: one that compares (currently
only equality and inequality) a field with another field located in an
enclosing join context.
In the process, it was discovered that simplifying the lemma::forms join
field earlier had actually made some queries return inaccurate results:
the inflection of the form was being ignored, and because of the inner
join, any form in the lemma could be used. Because the existing
condition join did not allow the condition field to be on the from side
of the join, two things were done: a condition version of joinThrough
was added, and lemma was finally eliminated as a top-level object,
replaced instead with a condition join between word and form through
lemmas_forms.
Queries are also now grouped by the first select field (assumed to be
the primary ID) of the top table, in order to eliminate the duplicates
created by inner joins, so that random queries have a uniform
distribution across results.
Created a database index on pronunciations(rhyme), which decreases query
time for rhyming filters. The new database version is
backwards-compatible because no data or structure changed.
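
As a rough sketch of how these pieces could combine (the table and
column names are assumptions, not the datafile's actual schema), a
random rhyming query might come out shaped something like this:

    #include <sqlite3.h>
    #include <iostream>

    // The second join compares fields against the enclosing join context:
    // equality on rhyme, inequality on prerhyme. Grouping on the top
    // table's primary ID collapses duplicates from the inner joins before
    // the random ordering is applied.
    const char* exampleQuery = R"(
      SELECT words.word_id
      FROM words
      INNER JOIN pronunciations ON pronunciations.word_id = words.word_id
      INNER JOIN pronunciations AS target
        ON target.rhyme = pronunciations.rhyme
       AND target.prerhyme != pronunciations.prerhyme
      WHERE target.word_id = ?
      GROUP BY words.word_id
      ORDER BY RANDOM()
      LIMIT 1
    )";

    int main()
    {
      sqlite3* db = nullptr;
      sqlite3_open("output.sqlite3", &db); // placeholder datafile path

      sqlite3_stmt* stmt = nullptr;
      if (sqlite3_prepare_v2(db, exampleQuery, -1, &stmt, nullptr) == SQLITE_OK)
      {
        sqlite3_bind_int(stmt, 1, 1234); // hypothetical word id to rhyme with

        while (sqlite3_step(stmt) == SQLITE_ROW)
        {
          std::cout << sqlite3_column_int(stmt, 0) << std::endl;
        }

        sqlite3_finalize(stmt);
      }

      sqlite3_close(db);
      return 0;
    }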
|
Selrestrs are no longer logically a tree of positive/negative
restrictions that are ANDed/ORed together; instead, they are a flat set
of positive restrictions that are ORed together. They are stored as
strings in a table called selrestrs, just like synrestrs, which also
makes them much more queryable. This change required some changes to the
VerbNet data: any ANDed clauses had to be consolidated into single
selrestrs, and any negative selrestrs had to be converted into positive
ones. The changes made are detailed on the wiki.
Preposition choices are now encoded as comma-separated lists instead of
as JSON. This change, along with the selrestrs one, allows us to remove
verbly's dependency on nlohmann::json.
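
The comma-separated encoding needs only trivial parsing, which is what
makes the JSON dependency unnecessary. A minimal sketch (hypothetical
helper, not verbly's actual code):

    #include <string>
    #include <vector>
    #include <sstream>

    // Split a stored preposition choice list such as "about,of,towards"
    // into its individual options.
    std::vector<std::string> parseChoices(const std::string& encoded)
    {
      std::vector<std::string> choices;
      std::istringstream stream(encoded);
      std::string choice;

      while (std::getline(stream, choice, ','))
      {
        choices.push_back(choice);
      }

      return choices;
    }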
|
Groups are much less significant now: they no longer have a database
table, and they are no longer considered a top-level object. Instead of
containing their own role data, that data is folded into the frames so
that it is easier to query; as a result, each group has its own copy of
the frames that it contains. Additionally, parts are now considered
top-level objects, and you can query for frames based on attributes of
their indexed parts. Synrestrs are also contained in their own table
now, so that parts can be filtered against their synrestrs; they are,
however, not considered top-level objects.
Created a new type of field, the "join where" or "condition join" field,
which is a normal join field that has a built-in condition on a
specified field. This is used to allow creating multiple distinct join
fields from one object to another. It is required for the lemma::form
and frame::part joins, because filters for forms of separate inflections
should not be coalesced; similarly, filters on differently indexed frame
parts should not be coalesced.
Queries can now be ordered by a field, ascending or descending, in
addition to randomly as before. This is necessary for accessing the
parts of a verb frame in the correct order, but may be useful to an end
user as well.
Fixed a bug in statement generation where condition groups were not
being surrounded by parentheses, which made mixing OR groups and AND
groups generate inaccurate statements. Additionally, parentheses are not
placed around the top-level condition, and nested condition groups with
the same logic type are coalesced, to keep query strings as easy to read
as possible.
Also simplified the form::lemma field; it no longer conditions on the
inflection of the form like the lemma::form field does.
Also added a debug flag to statement::getQueryString that makes it
return a query string with all of the bindings filled in, for debug use
only.
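
A rough sketch of the parenthesization rule described above (hypothetical
types, not verbly's statement code): nested groups are wrapped unless
they share their parent's logic type, in which case they are coalesced,
and the top-level group is left bare.

    #include <string>
    #include <vector>

    // A condition is either a leaf (a rendered SQL fragment) or a group of
    // children combined with AND or OR.
    struct condition
    {
      enum class logic { leaf, con, dis }; // leaf, AND group, OR group

      logic type = logic::leaf;
      std::string fragment;            // used when type == leaf
      std::vector<condition> children; // used for groups
    };

    std::string render(const condition& cond, bool topLevel = true)
    {
      if (cond.type == condition::logic::leaf)
      {
        return cond.fragment;
      }

      const std::string glue =
        (cond.type == condition::logic::con) ? " AND " : " OR ";

      std::string result;
      for (const condition& child : cond.children)
      {
        if (!result.empty())
        {
          result += glue;
        }

        // A child group with the same logic type is coalesced into its
        // parent, so it does not need its own parentheses either.
        bool coalesce = (child.type == cond.type);
        result += render(child, coalesce);
      }

      // The top-level condition (and coalesced same-logic groups) stay
      // bare; any other nested group is wrapped so that mixing AND and OR
      // keeps its intended precedence.
      return topLevel ? result : "(" + result + ")";
    }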
|
Previously, the generator would recognize at most one form per
inflection per lemma; now, the generator adds all variants in AGID to
the database.
|
The new object structure was designed to build on the existing WordNet
structure, while also adding in all of the data that we get from other
sources. More information about this can be found on the project wiki.
The generator has already been completely rewritten to generate a
datafile that uses the new structure. In addition, a number of indexes
are created, which doubles the size of the datafile but also allows for
much faster lookups. Finally, the new generator is written modularly and
is a lot more readable than the old one.
The verbly interface to the new object structure has mostly been
completed, but has not been fully tested. There is a completely new
search API that makes heavy use of operator overloading; documentation
on how to use it should go up at some point.
Token processing and verb frames are currently unimplemented. The source
for these has been left in the repository for now.
|
Also updated CMakeLists.txt such that including projects don't have to include sqlite3.
|
The generator previously had a problem wherein it would ignore WordNet lemmas containing certain non-alpha characters (hyphens, slashes, numbers, apostrophes). In addition to these words not being included in the generated datafile, this had the side effect of causing relationships involving the ignored words (e.g. hypernymy, synonymy, etc.) to instead be related to the word with id 0, which did not exist. This rarely caused a failure with direct queries, but it caused hierarchical queries (most notably full hyponymy, which is where the error was noticed) to potentially permit far more lemmas than they should have, because a very large number of words could be transitively reached through the sentinel word id 0.
The generator has been fixed to not ignore words containing special characters, which removed word id 0 from most relationships and therefore fixed hierarchical queries. The only remaining uses of word id 0 are as a synonym of "free-flying" (synset 301380571) and as an anti-mannernym of "aerially" (synset 400202718). This is because the WordNet data is malformed in the definitions of two words: "aerial" (synset 301380267) and "marine" (synset 301380721). The generator ignored those two lines, causing the described error, although the latter word being ignored did not cause any other errors.
The bug was discovered when the Twitter bot difference (https://github.com/hatkirby/difference) generated a tweet (https://twitter.com/differencebot/status/722084219925700613) as a result of returning the noun "tearaway" in a full hyponym query of "artifact".
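
To illustrate why one sentinel id distorts transitive queries (a toy
sketch, unrelated to the generator's real data structures): any id that
many unrelated words point to acts as a hub, and a full-hyponym
traversal that reaches it absorbs everything on the other side.

    #include <map>
    #include <set>
    #include <vector>

    // hyponyms[w] lists the direct hyponyms of word id w (toy data).
    using hyponym_table = std::map<int, std::vector<int>>;

    // Depth-first collection of the full (transitive) hyponym set.
    void fullHyponyms(const hyponym_table& hyponyms, int word, std::set<int>& result)
    {
      auto it = hyponyms.find(word);
      if (it == hyponyms.end())
      {
        return;
      }

      for (int child : it->second)
      {
        if (result.insert(child).second)
        {
          fullHyponyms(hyponyms, child, result);
        }
      }
    }

    // If ignored words are all recorded as id 0, then as soon as any real
    // word lists 0 as a hyponym, the closure also absorbs everything that
    // 0 points to -- words with no actual relationship to the query.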
|
Rhyme detection now ensures that any rhymes it finds are perfect rhymes and not identical rhymes. It is also now a lot faster, because additional information is stored in the datafile.
Also fixed a bug in the query interface (and the generator) that could cause incorrect queries to be executed.
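
For illustration only (not verbly's implementation), the distinction can
be expressed in terms of the rhyming part and the sound just before it:

    #include <string>

    // A pared-down pronunciation: the rhyming part (stressed vowel onward)
    // plus the sound immediately before it. Field names are hypothetical.
    struct pronunciation
    {
      std::string rhyme;    // e.g. "EY1 V" for both "gave" and "brave"
      std::string prerhyme; // e.g. "G" for "gave", "R" for "brave"
    };

    // Perfect rhyme: same rhyming part, different preceding sound.
    // Identical rhyme (e.g. "leave" / "believe"): the preceding sound
    // matches too, so it is excluded.
    bool isPerfectRhyme(const pronunciation& a, const pronunciation& b)
    {
      return a.rhyme == b.rhyme && a.prerhyme != b.prerhyme;
    }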
|
Datafile change: nouns now know how many images are associated with them on ImageNet, and also have their WordNet synset ID saved so that you can query for images of that noun via the ImageNet API. So far, verbly only exposes the ImageNet API URL, and doesn't actually interact with it itself. This may be changed in the future.
The query interface had a huge issue in which multiple instances of the same condition would overwrite each other. This has been fixed.
|
Added word complexity for nouns, adjectives, and adverbs.
Word complexity refers to the number of words in a noun, adjective, or adverb.
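
A measure like this is a one-liner over whitespace; the helper below is
hypothetical, purely to make the definition concrete:

    #include <string>
    #include <sstream>

    // "Word complexity" as described above: the number of space-separated
    // words in a form such as "merry-go-round" (1) or "fall apart" (2).
    int wordComplexity(const std::string& form)
    {
      std::istringstream stream(form);
      std::string word;
      int count = 0;

      while (stream >> word)
      {
        count++;
      }

      return count;
    }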
|
In addition:
- Added prepositions.
- Rewrote a lot of the query interface. For many relationships, it now supports nested AND, OR, and NOT logic.
- Rewrote the token class. It is now a union-like class instead of being polymorphic, which means smart pointers are no longer necessary (see the sketch after this list).
- Querying with regard to word derivation has been temporarily removed.
- Sentinel values are now supported for all word types.
- The VerbNet data retrieved from http://verbs.colorado.edu/~mpalmer/projects/verbnet/downloads.html was found not to be entirely satisfactory, especially with regard to adjective phrases. A patch file is now included in the repository describing the changes made to the VerbNet v3.2 download for the canonical verbly datafile.
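
The union-like token idea can be sketched as follows; this uses
std::variant purely as an illustration and is not verbly's actual token
class:

    #include <string>
    #include <variant>

    // Stand-ins for the alternatives a token can hold; purely illustrative.
    struct literal { std::string text; };
    struct fillin  { std::string type; };

    // A union-like token: a single value type holds whichever alternative
    // is active, so tokens can be stored and copied by value, with no
    // polymorphic base class and therefore no smart pointers.
    using token = std::variant<literal, fillin>;

    std::string evaluate(const token& tok)
    {
      if (std::holds_alternative<literal>(tok))
      {
        return std::get<literal>(tok).text;
      }

      // A real implementation would generate text for the fill-in here.
      return "<" + std::get<fillin>(tok).type + ">";
    }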
|
verbly into its own directory