hmmm...
maybe there's a misunderstanding about entropy of the input file as
specified in the question vs. entropy as defined in the course packet.
look at the example given on page 11 of the course packet. a two symbol
alphabet with probabilities 0.9 and 0.1 yields an entropy of h = 0.469.
however, using the formula given in question 6, the entropy of a
10-character message from this same alphabet (including frequencies) comes
out to 3.3219, far beyond the bound of h + 1 = 1.469.
is anyone out there able to explain this without giving away too much???
is there a fundamental misunderstanding here?
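for what it's worth, the packet's numbers can be sanity-checked numerically.
this is just a sketch of the arithmetic above, not the question-6 formula
itself (which isn't reproduced here); the only observation it adds is that
3.3219 happens to equal log2(10):

```python
import math

# per-symbol entropy of the two-symbol alphabet from the course packet
p = [0.9, 0.1]
h = -sum(pi * math.log2(pi) for pi in p)
print(round(h, 3))  # 0.469 bits per symbol, matching page 11

# h is a *per-symbol* quantity, so a 10-character message carries
# about 10 * h = 4.69 bits in total, not 0.469
print(round(10 * h, 2))

# curiously, the 3.3219 quoted above is exactly log2(10) --
# whether that is what question 6's formula computes is a separate question
print(round(math.log2(10), 4))  # 3.3219
```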
Post by Jamie Hargrove
I think I've got both 5 and 6 done - for anyone still working on this,
I'd suggest trying to find the quantities involved first rather than
taking the arcane-looking formulas and staring at them; I made much more
progress from the other end. For number 6, having been exposed to
combinatorics helps in my opinion, as does going back to look at the
definition of entropy and thinking about what is different in this case.
Post by Jack
Anybody made significant headway on these problems? I'm doing a great job
spinning my wheels and getting nowhere :D.
It seems like the thing to do is manipulate the notion of entropy to fit
these formulas... is that what you guys are doing?