path: root/kgramstats.h
Commit message (author, date; files changed, lines -/+)
* Converted to C++ style randomization (Kelly Rauchenberger, 2019-02-28; 1 file, -1/+2)
    The logic in rawr::randomSentence with the cuts might be slightly different now, but who even knows what's going on there.
* Interned tokens to reduce memory footprint (Kelly Rauchenberger, 2018-08-26; 1 file, -9/+14)
* Marked rawr::randomSentence const (Kelly Rauchenberger, 2016-08-20; 1 file, -1/+1)
* Added ability to require a minimum number of corpora in generated output (Kelly Rauchenberger, 2016-05-31; 1 file, -0/+4)
    Also fixed a bug with tokenizing multiple corpora.
* Newlines, colons, and semicolons are now valid terminators (Kelly Rauchenberger, 2016-05-29; 1 file, -1/+15)
* Pulled the ebooks functionality out into a library (Kelly Rauchenberger, 2016-05-20; 1 file, -95/+106)
* Member hiding is fun (Kelly Rauchenberger, 2016-03-08; 1 file, -1/+1)
* Full sentences mode! (Kelly Rauchenberger, 2016-03-08; 1 file, -1/+1)
* Added emoji freevar (Kelly Rauchenberger, 2016-02-01; 1 file, -1/+4)
    Strings of emoji are tokenized separately from everything else and added to an emoticon freevar, which is mixed in with regular emoticons like :P. This breaks old-style freevars like $name$ and $noun$, so some legacy compatibility support is left in, but eventually $name$ should be made into an actual new freevar. Emoji data is from gemoji (https://github.com/github/gemoji).
* Rewrote how tokens are handled (Kelly Rauchenberger, 2016-01-29; 1 file, -60/+64)
    A 'word' is now an object that contains a distribution of the forms that word can take. For now, most words contain just one form, the canonical one; the only special use is currently hashtags. Malapropisms have been disabled because of compatibility issues and because an upcoming feature is planned to replace them.
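    The form-distribution idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual types in kgramstats.h; the struct name, member names, and weighting scheme are all assumptions.

    ```cpp
    #include <map>
    #include <random>
    #include <string>

    // Hypothetical sketch: a word owns a histogram of surface forms,
    // and emitting the word samples a form proportionally to its count.
    struct word_sketch {
      std::map<std::string, int> forms; // form -> occurrence count

      template <typename Rng>
      const std::string& random_form(Rng& rng) const {
        int total = 0;
        for (const auto& p : forms) total += p.second;
        std::uniform_int_distribution<int> dist(0, total - 1);
        int r = dist(rng);
        for (const auto& p : forms) {
          if (r < p.second) return p.first; // landed in this form's bucket
          r -= p.second;
        }
        return forms.begin()->first; // unreachable for a non-empty histogram
      }
    };
    ```

    For a word with a single canonical form, the sample is deterministic; a hashtag word would instead carry several observed forms with their counts.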
* Hashtags are now randomized (Kelly Rauchenberger, 2016-01-25; 1 file, -4/+20)
* Rewrote quite a bit of kgramstats (Kelly Rauchenberger, 2016-01-04; 1 file, -11/+73)
    The algorithm still treats most tokens literally, but now groups together tokens that terminate a clause somehow (i.e. contain .?!,) without distinguishing between the different terminating characters. For each word that can terminate a sentence, the algorithm creates a histogram of the terminating characters and the number of occurrences of those characters for that word; tracking the occurrence counts allows things like um???? and um,,,,, to still be folded down into um. Then, when the terminating version of that token is invoked, a random terminating string is chosen from that word's histogram and appended to the token (again, to allow things like the desu-ly use of multiple commas to end clauses).
    The algorithm also has a slightly more advanced kgram structure now: a special "sentence wildcard" kgram value, set aside from normal strings of tokens, can match any terminating token. This kgram value is never printed; it is only ever present in the query kgrams and cannot actually appear in the histograms, since it is of a different datatype. It is used at the beginning of sentence generation to make sure that the first couple of words generated actually form the beginning of a sentence instead of picking up somewhere in the middle of one, and it is also used to reset sentence generation on the rare occasions that the end of the corpus is reached.
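    The terminator-histogram sampling described above might look something like this minimal sketch, assuming a plain map from terminating string to occurrence count. The function name and signature are illustrative, not the library's API.

    ```cpp
    #include <map>
    #include <random>
    #include <string>

    // Hypothetical sketch: pick a terminating string for `word` with
    // probability proportional to how often it was observed, and append it.
    std::string terminate(const std::string& word,
                          const std::map<std::string, int>& histogram,
                          std::mt19937& rng) {
      int total = 0;
      for (const auto& p : histogram) total += p.second;
      if (total == 0) return word + "."; // fallback for an empty histogram
      std::uniform_int_distribution<int> dist(0, total - 1);
      int r = dist(rng);
      for (const auto& p : histogram) {
        if (r < p.second) return word + p.first;
        r -= p.second;
      }
      return word + ".";
    }
    ```

    A word whose histogram contains "????" three times and "." once would usually be emitted as um????, occasionally as um., matching the folding behavior described above.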
* Added malapropisms (Kelly Rauchenberger, 2015-11-22; 1 file, -8/+7)
* Kerjiggered the algorithms (Kelly Rauchenberger, 2015-07-19; 1 file, -0/+5)
* Rewrote weighted random number generator (Feffernoose, 2013-10-05; 1 file, -1/+2)
    The previous method of picking the next token was flawed in some mysterious way that ended up choosing various words occurring only once in the input corpus as the first word of the generated output (most notably "hysterically," "Anarchy," "Yorkshire," and "impunity.").
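    In modern C++ (as in the later "Converted to C++ style randomization" commit above), weighted next-token selection of this kind can be expressed directly with std::discrete_distribution from <random>. This is a generic sketch of the technique, not the repository's actual implementation.

    ```cpp
    #include <cstddef>
    #include <random>
    #include <vector>

    // Sketch: pick an index with probability proportional to its count,
    // letting the standard library handle the cumulative-weight bookkeeping.
    std::size_t pick_weighted(const std::vector<int>& counts, std::mt19937& rng) {
      std::discrete_distribution<std::size_t> dist(counts.begin(), counts.end());
      return dist(rng);
    }
    ```

    Because the distribution is built from the raw counts, a token seen once in the corpus is sampled far less often than a common one, avoiding the bias the old hand-rolled generator exhibited.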
* Weighed token casing and presence of periods (Feffernoose, 2013-10-01; 1 file, -3/+9)
    Tokens which differ only by casing or by the presence of an ending period are now considered the same token. When tokens are generated, they are cased based on the prevalence of Upper/Title/lower casing of the token in the input corpus, and similarly, a period is added to the end of the word based on how often the same token ended with a period in the input corpus.
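    The folding rule described above can be sketched as a canonicalization step: tokens that differ only by case or a trailing period map to the same key, while the casing and period counts would be tallied separately for use at generation time. The helper below is hypothetical, not the actual code.

    ```cpp
    #include <cctype>
    #include <string>

    // Hypothetical sketch: fold a token to a canonical key so that
    // "Word", "word", and "word." all count as the same token.
    std::string fold_token(std::string token) {
      if (!token.empty() && token.back() == '.') token.pop_back();
      for (char& c : token)
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
      return token;
    }
    ```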
* Wrote program (Feffernoose, 2013-10-01; 1 file, -0/+28)