 Makefile.am            | 11 +++++++----
 README.md              | 40 +++++++++++++++++++++++++++++++++++++---
 main.cpp => ebooks.cpp |  2 ++
 gen.cpp                | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 94 insertions(+), 7 deletions(-)
diff --git a/Makefile.am b/Makefile.am
index c5b52ce..c9f61cf 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,7 +1,10 @@
 AUTOMAKE_OPTIONS = subdir-objects
 ACLOCAL_AMFLAGS = ${ACLOCAL_FLAGS}
 
-bin_PROGRAMS = rawr-ebooks
-rawr_ebooks_SOURCES = main.cpp kgramstats.cpp
-AM_CPPFLAGS = $(LIBTWITCURL_CFLAGS) $(YAML_CFLAGS)
-rawr_ebooks_LDADD = $(LIBTWITCURL_LIBS) $(YAML_LIBS)
\ No newline at end of file
+bin_PROGRAMS = rawr-ebooks rawr-gen
+rawr_ebooks_SOURCES = ebooks.cpp kgramstats.cpp
+rawr_gen_SOURCES = gen.cpp kgramstats.cpp
+rawr_ebooks_CPPFLAGS = $(LIBTWITCURL_CFLAGS)
+AM_CPPFLAGS = $(YAML_CFLAGS)
+rawr_ebooks_LDADD = $(LIBTWITCURL_LIBS) $(YAML_LIBS)
+rawr_gen_LDADD = $(YAML_LIBS)
\ No newline at end of file
diff --git a/README.md b/README.md
index 1462a9c..e01eb45 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,38 @@
-rawr-ebooks
-===========
+# rawr-ebooks
 
-you know
+*I suddenly found it very hilarious.* --[@Rawr_Ebooks](https://twitter.com/Rawr_Ebooks/status/385131476141879296)
+
+rawr-ebooks is a very good example of taking things too far. One of the assignments in the algorithms course I took was to implement an algorithm in SML that would generate nonsense statistically similar to an input corpus (basically, a plain text file full of words and sentences). The actual point of the assignment was to find an algorithm that met certain required cost bounds, but after the assignment ended, I decided the project was too fun to let go and, combined with the recent revelation that [@Horse_Ebooks](https://twitter.com/Horse_Ebooks) was not actually a bot as widely believed, decided to augment my algorithm with the ability to post to Twitter.
+
+rawr-ebooks actually consists of two programs: `rawr-ebooks`, which generates nonsense and posts it to a Twitter account every hour, and `rawr-gen`, which generates nonsense on command. `rawr-gen` is probably the more useful of the two for the casual user.
+
+Here is how one would go about compiling `rawr-gen`:
+
+1. Clone rawr-ebooks onto your computer:
+
+    <pre>git clone http://github.com/hatkirby/rawr-ebooks</pre>
+
+2. Use autoconf and automake to generate the configure script:
+
+    <pre>autoreconf --install --force</pre>
+
+3. Run configure:
+
+    <pre>./configure</pre>
+
+4. Make:
+
+    <pre>make rawr-gen</pre>
+
+5. Rename `config-example.yml` to `config.yml` and, within it, replace `corpus.txt` with the path to your input corpus.
+6. Run `rawr-gen`:
+
+    <pre>./rawr-gen</pre>
+
+## Implementation details
+
+I ended up rewriting the algorithm in C++, as the SML implementation did not handle randomization very well and would have been very difficult to adapt to post to Twitter. The new version has many improvements to the quality of the generated output, and the input corpus that I use for @Rawr_Ebooks is growing every day. As of October 6th, 2013, it is about 200,000 words long.
+
+rawr-ebooks uses [yaml-cpp](https://code.google.com/p/yaml-cpp/) to read configuration data from a file (mainly, where the input corpus is located, and the information used to connect to Twitter), and [twitcurl](https://code.google.com/p/twitcurl/) to post to Twitter.
+
+The program is roughly divided into two stages: a preprocessing stage and a generation stage. The preprocessing stage runs once at the beginning of the program's run and builds the statistics that make generation cheap; it runs in O(t^2) time, where t is the number of tokens in the input corpus, so it can take a fair bit of time on a large corpus. The generation stage actually produces the output and can occur many times per program run (in fact it should; otherwise you aren't making good use of the time spent in the preprocessing stage!). It runs in O(n log t) time, where t is again the number of tokens in the input corpus and n is the number of words to generate, usually between 5 and 50, so the generation stage runs far, far more quickly than the preprocessing stage.
\ No newline at end of file
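The two-stage design the README describes (an expensive preprocessing pass over the corpus, then many cheap generation passes) can be sketched with a much simpler bigram model. This is an illustrative sketch only: `BigramModel`, `preprocess`, and `generate` are hypothetical names, not part of rawr-ebooks, and the real `kgramstats` tracks longer k-grams with weighted statistics.

```cpp
#include <cstdlib>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Maps each token to the list of tokens that follow it in the corpus.
typedef std::map<std::string, std::vector<std::string> > BigramModel;

// Preprocessing stage: tokenize the corpus on whitespace and record,
// for every token, each token that immediately follows it.
BigramModel preprocess(const std::string& corpus)
{
  BigramModel model;
  std::istringstream in(corpus);
  std::string prev, cur;
  if (in >> prev)
  {
    while (in >> cur)
    {
      model[prev].push_back(cur);
      prev = cur;
    }
  }
  return model;
}

// Generation stage: random-walk the model from a seed token for up to
// n tokens, stopping early if the current token has no known successor.
std::vector<std::string> generate(const BigramModel& model,
                                  std::string seed, size_t n)
{
  std::vector<std::string> out;
  out.push_back(seed);
  for (size_t i = 1; i < n; i++)
  {
    BigramModel::const_iterator it = model.find(seed);
    if (it == model.end() || it->second.empty()) break;
    seed = it->second[rand() % it->second.size()];
    out.push_back(seed);
  }
  return out;
}
```

Building the table once and then sampling from it repeatedly is what lets the generation stage stay so much cheaper than preprocessing.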
diff --git a/main.cpp b/ebooks.cpp
index 20f1a1f..ed660a9 100644
--- a/main.cpp
+++ b/ebooks.cpp
@@ -26,8 +26,10 @@ int main(int argc, char** args)
     corpus += " " + line;
   }
 
+  cout << "Preprocessing corpus..." << endl;
   kgramstats* stats = new kgramstats(corpus, 5);
 
+  cout << "Generating..." << endl;
   for (;;)
   {
     vector<string> doc = stats->randomSentence(rand() % 25 + 5);
diff --git a/gen.cpp b/gen.cpp
new file mode 100644
index 0000000..dc73e0f
--- /dev/null
+++ b/gen.cpp
@@ -0,0 +1,48 @@
+#include <cstdio>
+#include <list>
+#include <map>
+#include "kgramstats.h"
+#include <ctime>
+#include <vector>
+#include <cstdlib>
+#include <fstream>
+#include <iostream>
+#include <unistd.h>
+#include <yaml-cpp/yaml.h>
+
+using namespace::std;
+
+int main(int argc, char** args)
+{
+  srand(time(NULL));
+
+  YAML::Node config = YAML::LoadFile("config.yml");
+
+  ifstream infile(config["corpus"].as<std::string>().c_str());
+  string corpus;
+  string line;
+  while (getline(infile, line))
+  {
+    corpus += " " + line;
+  }
+
+  cout << "Preprocessing corpus..." << endl;
+  kgramstats* stats = new kgramstats(corpus, 5);
+
+  cout << "Generating..." << endl;
+  for (;;)
+  {
+    vector<string> doc = stats->randomSentence(rand() % 35 + 15);
+    string hi;
+    for (vector<string>::iterator it = doc.begin(); it != doc.end(); ++it)
+    {
+      hi += *it + " ";
+    }
+
+    cout << hi << endl;
+
+    getc(stdin);
+  }
+
+  return 0;
+}
\ No newline at end of file
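For reference, `gen.cpp` above loads `config.yml` and reads only the `corpus` key (`config["corpus"]`), so a minimal configuration might look like the sketch below. The path is an example; the Twitter credentials that `rawr-ebooks` additionally needs are not shown in this diff, so they are omitted here rather than guessed at.

```yaml
# Minimal config.yml for rawr-gen; gen.cpp only reads the "corpus" key.
corpus: corpus.txt
```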