This is an update on what I have recently done to Hellish Bricks.
Progression Curves
The game got progression curves for running speed and initial jump speed.
These are important so that the character evolves, creating more gameplay
possibilities as the game progresses.
Colored Lights
I’ve also enabled lantern and ambient light commands so that the player can
change the lantern and the ambient light colors during the game or during
initialization, through the boot script.
Default Boot Script
# Set lantern to a light yellow.
lantern #EEEEBB
# Set ambient to a dark red.
ambient #443322
This is the first of what may end up being a series of posts regarding the
development of my closed-source game Hellish Bricks.
The game is heavily inspired by Devil Daggers (2016), but intends to focus on
low-poly, minimalist graphics instead of retro ones.
Hellish Bricks is written in modern C++ and requires at least OpenGL 3.3 and,
for now, only about 128 MiB of free memory.
I intend to write better and more visually appealing shaders in the future and
start working on some fancier graphical effects, such as particles and shadows.
Gameplay Video
Playable Demo
You need at least Windows Vista to run the game, and you may also need to
download and install the Microsoft Visual C++ Redistributable for Visual Studio
2017 before being able to execute it.
Recently I had to solve a problem that asked you to determine the bitwise AND
of a range of nonnegative integers. There is an obvious linear solution to this
problem, which simply ANDs every number in the range.
Bitwise AND of [4, 10] = 4 & 5 & 6 & 7 & 8 & 9 & 10
However, after thinking about how ANDing ends up “erasing” bits permanently,
I figured out the following logarithmic solution: as long as the range contains
at least two numbers, the last bit of the result will be zero, so you can
compute the bitwise AND of the prefixes (both endpoints shifted right by one)
and append a zero to the result by shifting it back to the left.
In this post I will present a simple and somewhat generic solution for the
sliding maximum problem.
Find the maximum element for every K consecutive elements.
Note that the sliding minimum can be seen as a negated sliding maximum problem,
just like the maximum spanning tree can be seen as the minimum spanning tree of
the graph with negated edge weights.
Implementation
Below is a generic C++ sliding window I implemented that takes a comparator as a
template parameter. This allows it to be instantiated for both the sliding
maximum and the sliding minimum.
template <typename T, typename Comparator>
class SlidingWindow {
  struct Block {
    Block(T v, size_t w) : value(v), width(w) {}
    T value;
    size_t width;
  };

  Comparator comp;
  std::deque<Block> data;

 public:
  void push(const T t) {
    size_t width = 1;
    while (!data.empty() && comp(data.back().value, t)) {
      // Absorb the widths of the dominated blocks we discard, so that the
      // sum of all widths stays equal to the number of elements in the window.
      width += data.back().width;
      data.pop_back();
    }
    data.emplace_back(t, width);
  }

  T get() const { return data.front().value; }

  void pop() {
    // Either reduce the width of the best block (front), or drop it.
    if (data.empty()) {
      return;
    }
    if (data.front().width > 1) {
      data.front().width--;
    } else {
      data.pop_front();
    }
  }
};
This solution is amortized \(O(1)\) for all operations, making it \(O(N)\) for
\(N\) elements. Using the standard library's ordered containers (such as
std::multiset) directly, we cannot do better than \(O(N \log N)\).
Sliding Maximum Example
Using 20 terms and a window of width 4, we have the following table:
This post is an update on a small side project I just started. It is called
BigWord and it aims to find the biggest word in a dictionary based on a
multiset of letters.
Sample Usage
For instance, it finds out that the biggest English words made with the letters
from “linuxwars” are “urinals” and “insular” and that the biggest English words
made with “applepies” are “pappies” and “applies”.
The existing code is written in C++14 and should run on any system that has a
reasonable C++14 compiler. The program relies on the public domain word list
shipped with Fedora, but you may use any plain-text dictionary you want.
It loads all words that can be constructed from the provided letter multiset,
ignoring words that are too long in order to improve performance, and then
traverses the word vector from longest to shortest, trying to match as many of
the biggest subsets of the provided multiset as possible.
Even though it is fast enough for a fairly big word list, I think there must be
a more efficient way to find the biggest subsets. Currently, the first example
above finishes in 200 ms on a list of 479,000 words.
Ideally, I would like even the worst-case queries to finish in less than 100 ms
for word lists with less than one million words, as suggested in
Usability Engineering
by Jakob Nielsen.
Some possible improvements are dumping the processed word vector after the
first execution and reusing it in subsequent runs, and sorting the input text
file by word length so that I/O can stop earlier. However, these possibilities
would add undesired complexity to the setup the user would need.
The following code snippet shows how letter counts are currently compared.
static bool is_contained(const LetterCount &a, const LetterCount &b) {
  if (a.letter_count > b.letter_count) {
    return false;
  }
  size_t remaining = b.letter_count - a.letter_count;
  for (size_t i = 0; i < alphabet_size; i++) {
    if (a.counters[i] > b.counters[i]) {
      return false;
    }
    // By catching excessive differences we may fail early.
    const size_t difference = b.counters[i] - a.counters[i];
    if (difference > remaining) {
      return false;
    }
    remaining -= difference;
  }
  return true;
}
There may be a faster way to do it, but I am failing to spot it.
You can find the project source code (licensed under the ISC license) at the
GitHub repository.