path: root/kgramstats.cpp
* Token generator now uses aspell to link different spellings of a word (Kelly Rauchenberger, 2016-02-03; 1 file, -3/+56)
  This is the grand scheme for the multi-formed word design.
* Terminator characters in the middle of tokens are no longer stripped (Kelly Rauchenberger, 2016-02-03; 1 file, -11/+16)
  Emoticon checking is also now case-sensitive, and a few more emoticons were added to the list.
* Fixed issue where closing opened delimiters wouldn't pop them off the stack (Kelly Rauchenberger, 2016-02-01; 1 file, -0/+2)
  This would cause a random quotation mark, for instance, to appear at the end of a tweet if a quote had been opened and closed naturally within the tweet.
* Added emoji freevar (Kelly Rauchenberger, 2016-02-01; 1 file, -3/+102)
  Strings of emoji are tokenized separately from anything else and added to an emoticon freevar, which is mixed in with regular emoticons like :P. This breaks old-style freevars like $name$ and $noun$, so some legacy support is left in for compatibility, but eventually $name$ should be made into an actual new freevar. Emoji data is from gemoji (https://github.com/github/gemoji).
* Rewrote how tokens are handled (Kelly Rauchenberger, 2016-01-29; 1 file, -176/+277)
  A 'word' is now an object that contains a distribution of forms that the word can take. For now, most words contain just one form, the canonical one; the only special use is currently hashtags. Malapropisms have been disabled because of compatibility issues and because an upcoming feature is planned to replace them.
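The word-as-distribution-of-forms design described above can be sketched as follows. This is an illustrative reconstruction, not the repository's actual code; the names `word`, `forms`, and `render` are assumptions.

```cpp
#include <cassert>
#include <map>
#include <random>
#include <string>

// Hypothetical sketch: a word owns a histogram of surface forms, and
// rendering it samples one form with probability proportional to its count.
// Most words would hold a single canonical form; hashtags would hold many.
struct word {
  std::map<std::string, int> forms; // surface form -> occurrence count

  const std::string& render(std::mt19937& rng) const {
    int total = 0;
    for (const auto& p : forms) total += p.second;
    std::uniform_int_distribution<int> dist(0, total - 1);
    int r = dist(rng);
    for (const auto& p : forms) {
      if (r < p.second) return p.first; // landed inside this form's bucket
      r -= p.second;
    }
    return forms.begin()->first; // unreachable for a non-empty histogram
  }
};
```

A word with one form always renders that form, so the common case degenerates to the old literal-token behavior.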
* Hashtags are now randomized (Kelly Rauchenberger, 2016-01-25; 1 file, -33/+82)
* Did you know you can put comments in front of ascii art (Kelly Rauchenberger, 2016-01-05; 1 file, -0/+34)
  (https://twitter.com/rawr_ebooks/status/684376473369706498)
* Rewrote quite a bit of kgramstats (Kelly Rauchenberger, 2016-01-04; 1 file, -277/+165)
  The algorithm still treats most tokens literally, but now groups together tokens that terminate a clause somehow (i.e. contain .?!,), without distinguishing between the different terminating characters. For each word that can terminate a sentence, the algorithm creates a histogram of the terminating characters, and of the number of occurrences of those characters, for that word (the occurrence count is to allow things like um???? and um,,,,, to still be folded down into um.). Then, when the terminating version of that token is invoked, a random terminating string is added to that token based on the histogram for that word (again, to allow things like the desu-ly use of multiple commas to end clauses).

  The algorithm now also has a slightly more advanced kgram structure: a special "sentence wildcard" kgram value is set aside from normal strings of tokens and can match any terminating token. This kgram value is never printed (it is only ever present in the query kgrams and cannot actually be present in the histograms, since it is of a different datatype). It is used at the beginning of sentence generation to make sure that the first couple of words generated actually form the beginning of a sentence instead of picking up somewhere in the middle of one, and also to reset sentence generation in the rare occasion that the end of the corpus is reached.
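The terminator-histogram idea above can be sketched as a small pair of routines: record each terminating string a word was seen with, then sample one by weight when emitting the terminating form of that word. This is a minimal illustration under assumed names (`observe`, `sample_terminator`), not the repository's actual implementation.

```cpp
#include <cassert>
#include <map>
#include <random>
#include <string>

// terminator string (".", "????", ",,,,,") -> number of occurrences
using terminator_hist = std::map<std::string, int>;

// Count one sighting of `word` followed by `terminator` in the corpus.
void observe(std::map<std::string, terminator_hist>& hists,
             const std::string& word, const std::string& terminator) {
  ++hists[word][terminator];
}

// Draw a terminating string with probability proportional to its count,
// so "um." stays common while "um????" still shows up occasionally.
std::string sample_terminator(const terminator_hist& h, std::mt19937& rng) {
  int total = 0;
  for (const auto& p : h) total += p.second;
  std::uniform_int_distribution<int> d(0, total - 1);
  int r = d(rng);
  for (const auto& p : h) {
    if (r < p.second) return p.first;
    r -= p.second;
  }
  return "."; // unreachable for a non-empty histogram
}
```

Folding um???? and um,,,,, into um. at training time, then re-expanding from the histogram at generation time, is what lets the model both generalize across terminators and reproduce their flavor.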
* guess what! the algorithm (Kelly Rauchenberger, 2015-12-30; 1 file, -31/+56)
  this time it's a literal algorithm again
  not canonizing away punctuation
  newlines are actually considered new sentences
  now we look for the end of a sentence and then start after that
* You guessed it,,, twerked the algo (Kelly Rauchenberger, 2015-11-23; 1 file, -44/+41)
* Added malapropisms (Kelly Rauchenberger, 2015-11-22; 1 file, -68/+93)
* I may have made things better. I may have made things worse. (Kelly Rauchenberger, 2015-11-22; 1 file, -5/+5)
* Added some newline recognition (Kelly Rauchenberger, 2015-07-24; 1 file, -31/+55)
* Took into account question marks and exclamation marks (Kelly Rauchenberger, 2015-07-19; 1 file, -2/+2)
* Stopped using C++11 because yamlcpp didn't like it (Kelly Rauchenberger, 2015-07-19; 1 file, -3/+6)
* Kerjiggered the algorithms (Kelly Rauchenberger, 2015-07-19; 1 file, -21/+166)
* Modified kgram shortening rate (Kelly Rauchenberger, 2014-04-22; 1 file, -1/+1)
* Stripped empty tokens from corpus (Feffernoose, 2013-10-06; 1 file, -2/+8)
* Rewrote weighted random number generator (Feffernoose, 2013-10-05; 1 file, -33/+37)
  The previous method of picking the next token was flawed in some mysterious way that ended up choosing various words that occurred only once in the input corpus as the first word of the generated output (most notably "hysterically," "Anarchy," "Yorkshire," and "impunity.").
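A correct weighted pick over a token histogram can be sketched as below: draw one integer in [0, total) and walk the cumulative counts, so each candidate's probability is proportional to its count and a word seen once cannot dominate. This is a generic sketch of the technique, assuming a count map; it is not claimed to be the repository's actual replacement code.

```cpp
#include <cassert>
#include <map>
#include <random>
#include <string>

// Pick a token with probability count/total. Walking the map and
// subtracting each bucket's count from the draw is equivalent to an
// inverse-CDF lookup over the cumulative distribution.
std::string weighted_pick(const std::map<std::string, int>& counts,
                          std::mt19937& rng) {
  int total = 0;
  for (const auto& p : counts) total += p.second;
  std::uniform_int_distribution<int> d(0, total - 1);
  int r = d(rng);
  for (const auto& p : counts) {
    if (r < p.second) return p.first;
    r -= p.second;
  }
  return counts.begin()->first; // unreachable if counts is non-empty
}
```

With counts like {"the": 90, "hysterically": 1}, the common word wins roughly 99% of draws, which is exactly the behavior the buggy version lacked.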
* Changed incidence of random kgram-trimming (Feffernoose, 2013-10-04; 1 file, -4/+10)
  Also added better terminal output.
* Weighed token casing and presence of periods (Feffernoose, 2013-10-01; 1 file, -25/+67)
  Tokens which differ only by casing or by the presence of a trailing period are now considered the same token. When tokens are generated, they are cased based on the prevalence of Upper/Title/lower casing of the token in the input corpus; similarly, a period is added to the end of the word based on how often the same token ended with a period in the input corpus.
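The canonization half of the step above can be sketched as follows: fold a raw token to lowercase, recording whether it was Title/UPPER cased and whether it carried a trailing period, so generation can later restore a surface form by prevalence. This is a hypothetical illustration; the struct and function names (`token_stats`, `canonize`) are assumptions, not the repository's API.

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Per-canonical-token counters gathered while reading the corpus.
struct token_stats {
  int total = 0;
  int titlecase = 0;   // seen as "Word"
  int uppercase = 0;   // seen as "WORD"
  int with_period = 0; // seen as "word."
};

// Strip a trailing period, note the casing, and return the lowercase
// canonical form, updating the stats as a side effect.
std::string canonize(const std::string& raw, token_stats& st) {
  std::string t = raw;
  ++st.total;
  if (!t.empty() && t.back() == '.') {
    ++st.with_period;
    t.pop_back();
  }
  bool all_upper = !t.empty();
  for (char c : t)
    if (!std::isupper(static_cast<unsigned char>(c))) { all_upper = false; break; }
  if (all_upper && t.size() > 1)
    ++st.uppercase;
  else if (!t.empty() && std::isupper(static_cast<unsigned char>(t[0])))
    ++st.titlecase;
  for (char& c : t)
    c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
  return t;
}
```

At generation time the counters become sampling weights: e.g. a token seen with a period in 60 of 100 occurrences would get a trailing period 60% of the time.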
* Wrote program (Feffernoose, 2013-10-01; 1 file, -0/+110)