Dlang PSA: Don’t use -release

Don’t use the -release command line switch for the D compiler.

Why? Because it removes bounds checks from arrays in @system code. Why is this a problem? Because the #1 cause of exploits in the world is buffer overflows — writing or reading data that you are not supposed to have access to.

In other words, if you have a bug in your code where you don't validate that your array usage is within bounds, bounds checks will prevent a catastrophic error or an exploit. Without them, if you are lucky, you get a segmentation fault that crashes your program; if you are unlucky, you get an exploit.

Note that -release doesn’t even optimize the code! You still need to use -O -inline to get maximum performance. If you are feeling a bit adventurous, you might use -check=assert=off, but that’s only if you really have expensive asserts that are causing performance problems. Even then, I might look into selectively compiling some modules with asserts off to achieve the desired performance.
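For instance, a build might look like this (file names are hypothetical; this sketches the selective approach mentioned above):

# full optimizations, bounds checks intact
dmd -O -inline app.d

# or: compile just the hot module with asserts off, then link it in
dmd -c -O -inline -check=assert=off hotloop.d
dmd -O -inline app.d hotloop.o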

In general, turning off safety checks is only justified for truly performance-critical code, and it should not be done project-wide. And for bounds checks? You can easily omit them in @system code by using the .ptr[index] mechanism, as the sketch below shows.
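For example, here is what that might look like in a hot @system loop (a sketch; the function name is made up):

@system int sumNoBounds(int[] arr) {
    int sum = 0;
    foreach(i; 0 .. arr.length)
        sum += arr.ptr[i]; // pointer indexing is never bounds-checked
    return sum;
}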

What about dub?

For those who use dub, you can actually override the release build option. Here is how I do it (dub.json format):

"buildTypes": {
    "release": {
        "buildOptions": [
            "inline",
            "optimize"
        ]
    }
}
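If you use the dub.sdl format instead, the equivalent should be:

buildType "release" {
    buildOptions "inline" "optimize"
}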

This means that when you type dub -b release, you won't accidentally remove the most important checks present in your code.

But I want my code to be the fastest ever!!!

No, you don’t. You don’t care if it takes 200ms vs 250ms. Trust me. Just don’t do it.

Here is a case where D beat pretty much all the competition, and never turned off bounds checks: https://github.com/jinyus/related_post_gen

New iopipe Abstraction – SegmentedPipe

For those who know my library iopipe, it has been pretty stagnant as of late. However, the other day, I needed to process a file line-by-line, along with N lines of context. Now, I do have an example in the iopipe library that shows how to do N lines of context, but it does this via a separately maintained list of line references. Such a thing is a pain to keep track of, and ironically works just like a buffer in iopipe does.

So I thought, “what if I make an iopipe which is a pipe of lines, where each element is another line from the source pipe?” Element 0 is the first line in the buffer, element 1 is the next one, etc.

What needs to be stored for this? If I store slices of the underlying window, those can change when data is released, which means reconstructing everything upon release (not ideal). If I store offsets into the source window, those also can change, but it’s probably more manageable. Instead of re-constructing all the lines, I can subtract the number of bytes removed from the source chain from each element. Each position in my “offset buffer” is a number from 0 to source.window.size, and will be in increasing order. Then when I fetch an “element” from the pipe, it will slice the data out of source.window based on these endpoints.

But that still means extra work on release, and it also invalidates any windows stored elsewhere (some of this can’t be helped). However, there’s a simple solution to this: if the first offset in the list is treated as the “origin”, then we can slice based on that being 0! As a bonus, it also tells us the position within the entire stream (since starting the line pipe) for each line.

So I went about writing this, and I couldn’t believe how simple it was! I can copy it all here, since it’s pretty short (I’m using shortened methods here to save some space):

// assumes: import iopipe.buffer : GCNoPointerAllocator, AllocatedBuffer;
//          import iopipe.valve : implementValve;
struct SegmentedPipe(SourceChain, Allocator = GCNoPointerAllocator) {
    private {
        SourceChain source;
        AllocatedBuffer!(size_t, Allocator, 16) buffer;
    }

    private this(SourceChain source) {
        this.source = source;
        auto nelems = buffer.extend(1);
        assert(nelems == 1);
        buffer.window[0] = 0; // initialize with an offset of 0.
    }

    mixin implementValve!source;

    // the "range"
    static struct Window {
        private {
            SegmentedPipe *owner;
            size_t[] offsets; // all the offsets of each of the segments
        }

        // standard random-access-range fare
        auto front() => this[0];
        auto back() => this[$-1];
        bool empty() => offsets.length < 2; // needs at least 2 offsets to properly slice
        void popFront() => offsets.popFront;
        void popBack() => offsets.popBack;
        size_t length() => offsets.length - 1;
        alias opDollar = length;

        auto opIndex(size_t idx) {
            immutable base = owner.buffer.window[0]; // first offset is always the front
            return owner.source.window[offsets[idx] - base .. offsets[idx + 1] - base];
        }
    }

    Window window() => Window(&this, buffer.window);

    size_t extend(size_t elements) {
        // ensure we can get a new element
        if(buffer.extend(1) == 0)
            return 0; // can't get any more buffer space!
        // always going to extend the source chain with 0, and give us a new segment
        auto baseElems = source.extend(0);
        if(baseElems == 0) {
            // no new data
            buffer.releaseBack(1);
            return 0;
        }
        buffer.window[$-1] = buffer.window[$-2] + baseElems;
        return 1;
    }

    void release(size_t elements) {
        source.release(buffer.window[elements] - buffer.window[0]);
        buffer.releaseFront(elements);
    }
}

// factory
auto segmentedPipe(Chain, Allocator = GCNoPointerAllocator)(Chain base) {
    return SegmentedPipe!(Chain, Allocator)(base);
}

For those not familiar with iopipe, the eponymous concept is similar to a range, but is essentially a sliding window of elements. extend gets more elements, window gives the current elements (as a random access range), and release forgets the front N elements from the window. In this way, you can completely control the buffer, and don’t have to allocate your own buffer for things.

You might notice the comment "needs at least 2 offsets": that's because we always need two offsets to slice an element. Now, I could special-case e.g. the last element so that I don't have to store it, but the code is so much nicer with a sentinel instead.

So how do we use it to get lines? What we need is an iopipe that extends one line at a time. That’s exactly what iopipe.textpipe.byLine does. The code looks like this:

        // assumes: import std.io; import iopipe.bufpipe; import iopipe.textpipe;
        auto lines = File(filename, mode!"r").refCounted
            .bufd // buffered
            .assumeText // assume it's utf8
            .byLine // extend one line at a time
            .segmentedPipe; // store lines in a buffer

And I was kind of shocked when this built and worked the first time. You know an abstraction is good when it writes easy, reads easy, everything is a simple composition of existing API, and it just works!
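To give an idea of how it might be consumed, here is a rough sketch (processLine and the context policy are mine, not part of iopipe):

enum contextLines = 4; // how much history to keep in the buffer
while(lines.extend(1) != 0) {
    auto w = lines.window;
    processLine(w[$ - 1]); // newest line; w[0], w[1], ... are the older context lines
    if(w.length > contextLines)
        lines.release(w.length - contextLines); // forget the oldest lines
}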

Expect this to be in iopipe soon. I want some more features here, like I’d like to be able to get the offset from each element, and allow some way to store more information from the underlying pipe/process. I think I might replace jsoniopipe’s JsonTokenizer with a JsonTokenPipe, and build things on top of that (i.e. validator, skip, etc). That actually would supersede the awkward cache system. Maybe I can get rid of the awkwardness of getting the string data too? One can only dream…

Spelunking Attribute Inference in D

Inference of attributes is a huge part of D programming. D admittedly has quite a lot of attributes, and four categories of attributes are related to functions:

  • memory safety – Includes @safe, @system, and @trusted.
  • pure – functional purity means that a function cannot access shared or global data.
  • nothrow – Whether a function can throw an Exception (note this does not include Error or other Throwable derivatives, see my other post on this).
  • @nogc – Functions marked with this cannot allocate memory from the GC. This includes hidden allocations the compiler might insert.

This post isn’t really about those attributes, and if you want to learn more about them, I recommend reading the D language specification, and searching for information about them on the D official blog (I wrote a post myself on writing @trusted code).

What I want to talk about here is attribute inference. Because of the proliferation of these different attributes, and because D is a very generative-heavy programming language (templates, CTFE, etc.), it can be quite awkward to properly attribute some functions. D's solution to this is to infer attributes based on the code being compiled. This is limited to functions whose source code the compiler knows must always be available when they are used. These include:

  • auto returning functions
  • template functions
  • functions inside a template
  • functions inside another function
  • lambda functions

Notably missing here are regular functions. Why? Because a function can be declared separately from its definition via a function prototype. Also of note: class member functions, even inside a class template, will not be inferred. Since non-template class member functions are virtual, those functions must be explicitly attributed.
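A couple of tiny examples of the boundary (the inferred attributes shown are what I'd expect given the rules above):

// inferred: the compiler always has the body of an auto function
auto twice(int x) { return x * 2; } // becomes @safe pure nothrow @nogc

// inferred per instantiation: a template function
void callIt(alias f)() { f(); }

// NOT inferred: a regular function might only ever exist as a prototype
int regular(int x);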

So what happens when attributes are wrong? The answer is that the compiler tells you something like the error from this code:

void foo() {
}
void main() @nogc {
    foo();
}
Error: `@nogc` function `D main` cannot call non-@nogc function `foo`

This is the error message from trying to call an incorrectly marked function foo. This is easy to figure out and correct — just put @nogc on foo and call it a day.

But what happens when the function that’s incorrectly marked is hidden behind an inferred function?

void foo() {
}
void bar(alias f)() {
    f();
}
void main() @nogc {
    bar!foo();
}
Error: `@nogc` function `D main` cannot call non-@nogc function `onlineapp.bar!(foo).bar`

No mention here of the real problem: foo is not marked @nogc. All we have is a reference to bar!foo. Now, this also isn’t too difficult to figure out, but this is also not the worst case. When inference failure happens, sometimes there are several layers to the problem. The function that needs attribution might be buried under 10 levels of templates, and maybe in those inside a static foreach, making it hard to figure out what, exactly, is causing the inference to do what it did.

So how do you find the problem? You do it by digging down through each layer until it becomes clear which part the compiler has seen that causes the inference to fail.

I'm going to pick one attribute, @nogc, and show how the process works for it. But realistically, all of them can be handled the same way.

Technique 1: Explicitly mark the template

It's not usually a good idea to mark a template with an attribute that can be inferred, especially @trusted. But in this case, it is a temporary measure: we want the compiler to dig a bit lower. You mark the template, and when you have solved the complete problem, you unmark it. I usually comment out the original line of code and put a TODO: marker there to remind me to remove it later.

If we mark our template above, we get a better error message:

void foo() {
}
void bar(alias f)() @nogc {
    f();
}
void main() @nogc {
    bar!foo();
}
Error: `@nogc` function `onlineapp.bar!(foo).bar` cannot call non-@nogc function `onlineapp.foo`

Nice! Now we have the error that shows us the real problem — foo is not marked. Just mark foo, verify that it compiles, and remove the extra attribute from bar. Done!

Technique 2: Copy and Rewrite

The problem with the first technique is that it sometimes adds failures for the rest of your code. What if we have something like this?

void foo() {
}
void bar(alias f)() @nogc {
    f();
}
void main() @nogc {
    bar!foo();
}

int* x;
void allocateit() {
    x = new int(42); // actually uses GC
}
void otherFunc() { // not @nogc
    bar!allocateit();
}

Now we get two errors — if we are lucky! And in the case of allocateit, it really isn't @nogc, so marking bar isn't valid. In this case, we only want bar to be @nogc when the f parameter is @nogc. That is the main point of inference!

To fix this, we need to copy bar, add the expected attribute, and use the copy only when we are making the problematic call.

void bar(alias f)() { // leave this one alone
    f();
}
void bar2(alias f)() @nogc { // copy and add attribute 
    f();
}
void main() @nogc {
    bar2!foo(); // we get the correct error here
}
void otherFunc() { // not @nogc
    bar!allocateit(); // now this succeeds
}

In this way, we have isolated the path of the compiler for this one case, because this is the case we are interested in. We leave all other cases alone. In a large application where a template might be used in many places, this technique is essential.

Technique 3: Use static if

static if can help us make different decisions based on compile-time data. Let's say, for instance, the offending call is inside a static loop. Maybe the template succeeds in being @nogc for some parameters, but not for others. Whether you use the normal path or the special attributed path has to depend on compile-time data detectable on the parameter.

This can be tricky, and there’s no “right” way to do this. It highly depends on what the “thing” is that triggers the error. I sometimes use type names, sometimes I use is expressions, sometimes I use __traits(compiles), etc. Whatever you use, single out the path you want to test, and make a specialized case for that one call.

void complicated(Args...)() {
    static foreach(T; Args) {
        static if(is(T == int)) bar2!T(); // specialized attributed path
        else bar!T(); // regular path
    }
}

Doing a full dig

Now that we've seen these techniques, how do we apply them to a real, nasty, 10-layer problem? In that case, we peel the rotten onion all the way to the core (which is likely your missing attribute). Use whichever technique is appropriate at the next layer, then repeat the sequence, always looking at the innermost function whose attributes were inferred. Eventually, you will get to the answer.

This can be troublesome, since you may not control much of the code that is involved. Some of it may even be in D’s standard library! But don’t be afraid to (temporarily) modify your copy — none of the issues that might arise from doing this matter until the compilation succeeds. And at that point, you undo all the instrumentation.

Sometimes, I make a complete copy of the code, or just re-install the package once I’m done finding the problem. Don’t be afraid to take things apart, just remember which screws went to which parts!

Recursive instantiation inference failure

Sometimes, if a template is determined to depend on itself in a certain way, the compiler gives up on inference and just assumes the worst case. An example:

auto forward(alias fn, Args...)(Args args) {
    return fn(args);
}
T factorial(T)(T val) {
    if(val == 1)
        return val;
    return forward!factorial(val - 1) * val;
}
void main() @nogc {
    auto x = factorial(5);
}
Error: `@nogc` function `D main` cannot call non-@nogc function `onlineapp.factorial!int.factorial`

Using technique 1, you can add @nogc to factorial, and it actually just compiles!

Unfortunately, there is no simple fix here. You can mark factorial explicitly @nogc, but this means that if some T value uses the GC, it can’t be used with factorial. These can sometimes be the hardest to diagnose, since normal techniques do not work.

I’ve seen different approaches to this, including using introspection to apply explicit attributes (which is not an easy thing to do). It may involve simply dictating to users the required attributes, and if you don’t use them, you don’t get to use the library.

I would like to see the compiler just become smarter about this. I believe that it could try compiling with the most restrictive attributes, and it should work most of the time. There might be some pathological cases that prevent inference, but just giving up is worse.

Great changes on the horizon!

In a recent version of the compiler (version 2.101.0), @safe inference has been instrumented so that when inference results in failed compilation, the compiler does a lot of this work for you! Let's take our original example, replace @nogc with @safe, and compile with 2.101.0 or later:

void foo() {
}
void bar(alias f)() {
    f();
}
void main() @safe {
    bar!foo();
}
Error: `@safe` function `D main` cannot call `@system` function `testsafe.bar!(foo).bar`
       which calls `testsafe.foo`

That second error message is saying that the call to foo itself is actually what makes that instantiation of bar unsafe. We no longer have to instrument bar! Imagine that this is a call chain that is 7 layers deep. Having the compiler explain each layer without having to instrument it is going to save a lot of time.

Unfortunately, this is only for @safe code, and not for any of the other 3 attributes. Hopefully these improvements will be mimicked for all attributes, and instrumenting code will be a thing of the past!

But until then, hopefully this post helps you find some of these nasty inference bugs without too much hair-pulling!

The Cost of Compile Time in D

When I was creating my presentation for dconf online 2022, I was looking at alternatives to building constraints. If you watched my talk, you can see the fruit of that experiment in my strawman library (which is very much a proof-of-concept, and not ready for real use).

But it got me thinking — how much more expensive are these strawman constraints than the current Phobos range constraints? Before going that far, though, I started looking at some of the Phobos constraints themselves, and realized that even there, we can achieve some savings.

Consider the constraint for isInputRange:

enum bool isInputRange(R) =
    is(typeof(R.init) == R)
    && is(ReturnType!((R r) => r.empty) == bool)
    && (is(typeof((return ref R r) => r.front)) ||
        is(typeof(ref (return ref R r) => r.front)))
    && !is(ReturnType!((R r) => r.front) == void)
    && is(typeof((R r) => r.popFront));

Let’s focus on one aspect of this, the use of the ReturnType template. What does that do? Essentially, it takes the parameter (in this case a lambda function) and evaluates to the return type of the callable.

But… we have that as part of the language, don't we? Yeah, it's called typeof. typeof gives you the "type of" an expression. And it's a direct link into the compiler's semantic analysis — no additional semantic computation is needed.
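As a quick sketch of the difference:

bool isPositive(int x) { return x > 0; }

// the builtin taps directly into semantic analysis, no templates involved:
static assert(is(typeof(isPositive(1)) == bool));

// the library route instantiates ReturnType, isCallable, FunctionTypeOf...:
// static assert(is(ReturnType!isPositive == bool));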

To see what we are comparing against, let’s take a look at the ReturnType template (and its dependencies):

template ReturnType(alias func)
if (isCallable!func)
{
    static if (is(FunctionTypeOf!func R == return))
        alias ReturnType = R;
    else
        static assert(0, "argument has no return type");
}

template FunctionTypeOf(alias func)
if (isCallable!func)
{
    static if ( (is(typeof(& func) Fsym : Fsym*) && is(Fsym == function)) || is(typeof(& func) Fsym == delegate))
    {
        alias FunctionTypeOf = Fsym; // HIT: (nested) function symbol
    }
    else static if (is(typeof(& func.opCall) Fobj == delegate) || is(typeof(& func.opCall!()) Fobj == delegate))
    {
        alias FunctionTypeOf = Fobj; // HIT: callable object
    }
    else static if (
            (is(typeof(& func.opCall) Ftyp : Ftyp*) && is(Ftyp == function)) ||
            (is(typeof(& func.opCall!()) Ftyp : Ftyp*) && is(Ftyp == function))
        )
    {
        alias FunctionTypeOf = Ftyp; // HIT: callable type
    }
    else static if (is(func T) || is(typeof(func) T))
    {
        static if (is(T == function))
            alias FunctionTypeOf = T;    // HIT: function
        else static if (is(T Fptr : Fptr*) && is(Fptr == function))
            alias FunctionTypeOf = Fptr; // HIT: function pointer
        else static if (is(T Fdlg == delegate))
            alias FunctionTypeOf = Fdlg; // HIT: delegate
        else
            static assert(0);
    }
    else
        static assert(0);
}

template isCallable(alias callable)
{
    // 20 lines of code
}

template isSomeFunction(alias T)
{
    // 15 lines of code
}

Whoa, that’s a lot of code to tell me what the type of something is! Why is it so complex? The reason is because in order to determine the return type of something, we have to use the typeof primitive, but this needs a valid expression. For a callable, that means we need a valid set of parameters. All of that needs to be introspected by the library, which is simply given a symbol and doesn’t know anything about that symbol without context.

However we have context! We know exactly how to call the lambda function we have constructed, with an R! Why do we need this complexity for something that should be a simple call? As most well-versed in writing generic library code know, this is not an easy thing to do (sometimes generic types can’t be easily constructed, or you might have issues with disabled copying, etc.). In addition, ReturnType is built to handle all sorts of callable things, not just lambda functions.

But isInputRange doesn’t actually need to construct, or even have a valid R for generating the expression, all it needs is an already existing R to call methods on it. We can do this using a reinterpret cast of null to an R* and now we have an “already made” R. Yes, this would crash if actually run, but we don’t ever need to run it, we just need to get its type! And so, here is an equivalent isInputRange template that does not use ReturnType:

enum isInputRange(R) =
    is(typeof(R.init) == R)
    && is(typeof(() { return (*cast(R*)null).empty; }()) == bool)
    && (is(typeof((return ref R r) => r.front)) ||
        is(typeof(ref (return ref R r) => r.front)))
    && !is(typeof(() { return (*cast(R*)null).front; }()) == void)
    && is(typeof((R r) => r.popFront));

The difference here is we have a no-argument lambda, and so we don’t have to rely on library tricks or introspection to know how to call it (and as you can see, we call it with no parameters as expected).

Measuring the results

Given an isInputRange template that is completely independent of std.traits, what is the result? How much does it save?

To test this, I wrote a program generator that created 10,000 identical but independently named input ranges, which are tested like this:

struct S0 { int front; void popFront() {}; bool empty = false; }
static assert(isInputRange!S0);
struct S1 { int front; void popFront() {}; bool empty = false; }
static assert(isInputRange!S1);
...
struct S9999 { int front; void popFront() {}; bool empty = false; }
static assert(isInputRange!S9999);
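The generator itself can be trivial: something like this sketch (file and import names are made up):

import std.stdio;

void main() {
    auto f = File("gen_test.d", "w");
    f.writeln("import rangetraits; // provides the isInputRange variant under test");
    foreach(i; 0 .. 10_000) {
        f.writefln("struct S%d { int front; void popFront() {}; bool empty = false; }", i);
        f.writefln("static assert(isInputRange!S%d);", i);
    }
}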

Running on my Linux system, using DMD 2.101.2, I get the following results:

COMMAND                    TIME     MEMORY USAGE
dmd -version=usePhobos     2.75s    1.755G
dmd -version=useTypeof     1.47s    621M

Looking at the savings, it’s quite significant — almost 50% time savings, and over 65% memory savings. Note that each call to ReturnType is unique, and so it will execute its own semantic analysis. Using the compiler’s -vtemplates switch, we can see that using the current Phobos adds quite a few dependent templates. For each usage of isInputRange, we see:

  • 2 distinct instantiations of ReturnType
  • 4 instantiations of isCallable (2 distinct)
  • 2 distinct instantiations of FunctionTypeOf
  • 2 distinct instantiations of isSomeFunction

All that adds up to an additional 8 distinct template instantiations, and 10 total instantiations. A distinct template instantiation will run semantic analysis, but a non-distinct one will just find the existing template in the symbol table and return it.

Using the measurement numbers we can somewhat extrapolate that each ReturnType instantiation adds 64 microseconds, and consumes 56.7K of RAM. The RAM consumption comes from storing the additional template instantiation symbols in the symbol table.
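For the curious, that extrapolation is just the measured deltas divided by the 20,000 distinct ReturnType instantiations (2 per range, 10,000 ranges), attributing the cost of the dependent templates to ReturnType and taking 1.755G as 1755M:

(2.75s - 1.47s) / 20,000 = 64 microseconds per instantiation
(1755M - 621M) / 20,000 ≈ 56.7K per instantiation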

Conclusion

Such small savings, why is it important? It’s important because this is a perfect example of “death by 1000 paper cuts”. Each little template instantiation gives us a bit of convenience, but adds a tiny cost. These costs can add up significantly, and produce an overall compiler experience that is frustratingly slow, or worse, runs out of memory (yes, I have had this happen)! For something such as isInputRange, which almost nobody ever looks at or needs to, the cost is not well spent — especially considering how short and readable the alternative is!

When you reach for something in std.traits, consider what the compile-time cost might be, and don't assume that a small call will be cheap. If the code is something almost nobody ever needs to read (as with isInputRange's internals), it's fine to make the messy details more complex in order to avoid such costs. If you can write the same thing using builtins, it will compile faster, and it might even work better. I prefer compiler builtins such as typeof, is expressions, and __traits over std.traits whenever possible, as long as the cognitive load of the resulting code isn't too great (and yes, it can be).

I do plan to submit a PR to streamline everything I can about the range traits, maybe we can all pitch in and see where some of this interdependent fat can be trimmed all throughout Phobos!

How to Keep Using D1 Operator Overloads

D1 style operator overloads have been deprecated in D2 since version 2.088, released in 2019. Version 2.100, released last month, saw those operator overloads removed completely from the language. However, using D’s fabulous metaprogramming capability, it is possible to write a mixin template shim that will allow your D1 style operator overloads to keep working.

For sure, the best path forward is to switch to the new style of operator overloads. But there can be good reasons to keep using the old ones. Maybe you really love the simplicity of them. Maybe you use them already for virtual functions in classes, and don’t want to change. Maybe you just don’t want to do much code editing to an old project.

Whatever the reason, this post will show you how to do it easily and succinctly!

D1 Operator Overloads vs. D2 Operator Overloads

An operator overload is a way for a custom type to handle operators (e.g. + and -). In D1 these were handled using plain named functions, such as opAdd for addition or opMul for multiplication. For an example to work with, here is a struct type that uses an integer to represent its internal state:

struct S {
   int x;
   S opAdd(S other) {
      return S(x + other.x);
   }
   S opSub(S other) {
      return S(x - other.x);
   }
   S opMul(S other) {
      return S(x * other.x);
   }
   S opDiv(S other) {
      assert(other.x != 0, "divide by zero!");
      return S(x / other.x);
   }
}

void main() {
   S s1 = S(6);
   S s2 = S(3);
   assert(s1 + s2 == S( 9));
   assert(s1 - s2 == S( 3));
   assert(s1 * s2 == S(18));
   assert(s1 / s2 == S( 2));
}

Note how repetitive the operator code is! Plus, we only handled 4 operations. There are actually 11 math and bitwise binary (2-arg) operations that could be potentially overloaded for an integer. This doesn’t count unary operations (e.g. S s3 = -s1) or operations where S is on the right side of the op, with maybe an int on the left side (e.g. opAdd_r, opMul_r). If we needed to overload based on operand type, we could branch out into template functions, but that might not be that much less code.
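For reference, a D1-style right-hand operator looked something like this (a sketch, extending the S above):

// D1 style: called for `2 * s`, where S is on the right-hand side
S opMul_r(int left) {
    return S(left * x);
}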

D2 decided that a better approach would be to use templates to handle operators in bulk. Instead of calling opAdd for + and opMul for *, the compiler calls opBinary!"+" and opBinary!"*" respectively. This means we can handle all the operations in one function. To process them all, we can rewrite S like this:

struct S {
   int x;
   S opBinary(string op)(S other) {
      static if(op == "/" || op == "%")
         assert(other.x != 0, "divide by zero!");
      return mixin("S(x ", op, " other.x)");
   } 
}

void main() {
   S s1 = S(6);
   S s2 = S(3);
   assert( s1 + s2  == S( 9));
   assert( s1 - s2  == S( 3));
   assert( s1 * s2  == S(18));
   assert( s1 / s2  == S( 2));
   assert( s1 % s2  == S( 0));
   assert((s1 | s2) == S( 7));
   // and so on
}

Note how we now have just one function (with a small special case for the division operators), yet we handle all the math and bitwise operations! The code is easier to write, less error prone, and less verbose.

Aliasing Operators

But what if you already have operators in D1 style, and you don’t want to change them, or merge them into one super-function?

D allows you to alias member functions to another symbol, and opBinary is no exception. Here is the original type, but with aliases for each of the operators:

struct S {
   int x;
   S opAdd(S other) {
      return S(x + other.x);
   }
   S opSub(S other) {
      return S(x - other.x);
   }
   S opMul(S other) {
      return S(x * other.x);
   }
   S opDiv(S other) {
      assert(other.x != 0, "divide by zero!");
      return S(x / other.x);
   }

   alias opBinary(op : "+") = opAdd;
   alias opBinary(op : "-") = opSub;
   alias opBinary(op : "*") = opMul;
   alias opBinary(op : "/") = opDiv;
}

Note that we are using a few cool features of D metaprogramming here. The aliases are eponymous templates which means I don’t have to write out the template long form, and we are using template parameter specialization to avoid having to use a single template and look for the covered operations inside the template, or having to use template constraints to filter out the operations we cover.

But we can do even better than this! Nobody wants to write this boilerplate tailored to each type, especially when different types may not cover the same exact set of operators.

Mixin Templates

A mixin template is a template with a set of declarations in it. Wherever you mixin that template, it’s (almost) as if you typed all those declarations directly. Using the power of D’s compile-time introspection, it’s possible to handle every single possible operator overload that D1 could offer, by writing aliases to the D1 style operator overload, automatically.

In order to do this, we are going to have three rules. First is that we don’t care if the operators are properly written in D1 style. As long as the names match, we will forward to them. We also don’t need to worry about overloads based on the types or parameters accepted, as aliases are just name rewrites. Second, this mixin MUST be added at the end of the type, because otherwise, the entire type’s members may not have been analyzed by the compiler (this may change in a future version of D). Third, D does not allow overloads between the mixed-in functions and regular functions — the regular functions will take precedence. So you cannot define any D2 style operators of a specific name (e.g. opBinary). If you want D2 operators, convert the whole thing, don’t use some D1 and some D2.

Let’s write just the opAdd declaration in a mixin template, and see how it works.

mixin template D1Ops() {
   static if(__traits(hasMember, typeof(this), "opAdd"))
      alias opBinary(op : "+") = opAdd;
}

There's a lot of meta code in here, so let me explain it all.

The mixin template declaration is telling the compiler that this is a template specifically for mixins. Technically, you can use any template for mixins, but declaring it a mixin template requires that it’s only used in that way.

If you don’t know what static if is, I highly recommend reading a tutorial on D metaprogramming, as it’s essential for almost every metaprogramming task. Needless to say, the contained code is only included if the condition is true.

__traits(hasMember, T, "opAdd") is a specialized condition that is true only if the specified type T (in this case, the type of the struct the mixin is being added to) contains a member having the name opAdd.

And finally, the alias is as we wrote before.

Now, how would we use this inside our type?

struct S {
   int x;
   S opAdd(S other) {
      return S(x + other.x);
   }
   S opSub(S other) {
      return S(x - other.x);
   }
   S opMul(S other) {
      return S(x * other.x);
   }
   S opDiv(S other) {
      assert(other.x != 0, "divide by zero!");
      return S(x / other.x);
   }

   mixin D1Ops;
}

That’s it! Now opAdd is hooked via the aliased opBinary instead of via the D1 operator overload. Therefore, S + S will compile on 2.100 and later. However, the other operator overloads will not.

Why do it this way? As we will see, using the static if allows us to mixin the template regardless of whether opAdd is present or not. Using this feature, we can handle every possible situation with regards to existing operator overloads.

Using the Full Power of D

Adding each and every operator overload to the mixin is going to be very repetitive. But there is no need to do this, D is a superpower in metaprogramming! All we need to do is lay out the operation mappings, and we can use another specialized metaprogramming feature, static foreach, to avoid having to repeat the same boilerplate over and over.

With this, we can handle every binary operation that the struct might have written D1 style:

mixin template D1Ops() {
   static foreach(op, d1;
     ["+" : "opAdd", "-" : "opSub", "*" : "opMul", "/" : "opDiv",
      "%" : "opMod"]) {
      static if(__traits(hasMember, typeof(this), d1))
         alias opBinary(string s : op) = mixin(d1);
   }
}

Let’s look at the new things we have added to the mixin template. The first thing is an associative array of string to string, indicating which ops should map to which D1 function names. static foreach is a feature which will, at compile time, loop over all the elements in a thing that normally you would iterate at runtime (in this case, the associative array). It’s as if you wrote all those things out directly one at a time, with the symbols op and d1 mapped to the keys and values of the associative array containing the operation mappings.

See how our static if has changed a bit: instead of using a string literal, we use the d1 symbol, which on the first iteration is "opAdd", on the second "opSub", and so on.

In addition, there is one minor change in the alias. Because we must alias the opBinary call to a symbol, and not a string, we must fetch the symbol based on its string name. mixin(d1) does this. This is a relatively new feature, in older compilers we could still achieve this with a single mixin statement for the whole alias statement, but just calling mixin on d1 is a lot cleaner looking.

With that, our final code looks like this:

mixin template D1Ops() {
   static foreach(op, d1;
     ["+" : "opAdd", "-" : "opSub", "*" : "opMul", "/" : "opDiv",
      "%" : "opMod"]) {
      static if(__traits(hasMember, typeof(this), d1))
         alias opBinary(string s : op) = mixin(d1);
   }
}

struct S {
   int x;
   S opAdd(S other) {
      return S(x + other.x);
   }
   S opSub(S other) {
      return S(x - other.x);
   }
   S opMul(S other) {
      return S(x * other.x);
   }
   S opDiv(S other) {
      assert(other.x != 0, "divide by zero!");
      return S(x / other.x);
   }

   mixin D1Ops;
}

void main() {
   S s1 = S(6);
   S s2 = S(3);
   assert( s1 + s2  == S( 9));
   assert( s1 - s2  == S( 3));
   assert( s1 * s2  == S(18));
   assert( s1 / s2  == S( 2));
}

You’ll notice that I intentionally included opMod in the mixin, even though our type does not have it. This demonstrates the power of the static if to only provide aliases if the appropriate D1 operator overload exists.

Filling it out

All that is left for opBinary is to fill out the mappings to handle any possible existing D1 binary operations. As long as you have a D1-style operator, the mixin will generate an alias to cover it.

And finally, any other D1 style operations as listed in the changelog, such as opUnary or opBinaryRight can also be covered by adding another loop. You could even nest the mappings if you wanted to, or include the name of the template to alias as part of the mapping. Or you might notice that all the opBinaryRight operators are the same as the opBinary operators (except in), and just do both at the same time.
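For example, covering the right-hand variants could be just one more loop inside the same mixin template (a sketch using the D1 names; adjust the mapping as needed):

static foreach(op, d1;
  ["+" : "opAdd_r", "-" : "opSub_r", "*" : "opMul_r", "/" : "opDiv_r",
   "%" : "opMod_r"]) {
    static if(__traits(hasMember, typeof(this), d1))
        alias opBinaryRight(string s : op) = mixin(d1);
}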

You also might forgo static foreach entirely and write all the aliases out by hand, simply because static foreach is slightly expensive, and so is constructing an associative array at compile time. Remember, once this template is done, it will never need updating. The advantage of using a loop is that you write a lot less code, which makes it much less error prone.

And if you aren’t in the mood to do it yourself, here is a gist mapping the entire suite of D1 operator overloads.

Comparing Exceptions and Errors in D

What are Exceptions and Errors in D? Why is there a difference? Why does D consider Errors throwable inside a nothrow function? Sometimes decisions seem arbitrary, but when you finally understand the reasoning, you can better appreciate why things are the way they are.

Throwing a Throwable

I'm not going to go into all the details of how throwing works in D or in any other language (that is easy to find online). But simply put, an exception represents an "exceptional" case, one which shouldn't occur in normal code. The reason throwing is preferred to other kinds of error handling (such as returning an error code, or a combined error/value) is that an exception requires handling. If you don't handle it, someone else will. And the default handler prints out most of the state that was current when the exception happened, and exits the program.

As a side note, the most recent compiler has a new feature called @mustuse which requires that any return value (which might contain an error) must be dealt with.
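For example (a minimal sketch; to my knowledge @mustuse currently applies to structs and unions):

import core.attribute : mustuse;

@mustuse struct ErrorCode {
    int value;
}

ErrorCode doWork() { return ErrorCode(0); }

void main() {
    // doWork();        // error: the result of doWork must be used
    auto rc = doWork(); // fine: the result is consumed
}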

That being said, throwing is relatively expensive, meaning that you should only do it in truly exceptional cases, and not use it for mundane flow control.

In D, you throw an exception or error simply by using the throw statement, which requires an object instance that is a derivative of Throwable:

int div(int x, int y) {
    if(y == 0) throw new Exception("Divide by zero!");
    return x / y;
}

Then you can catch it somewhere else. When you catch it, the exception contains all the information on how it was generated, including the file/line that generated the exception and all the places along the call stack that got to that point. The beauty of exception handling is you can put the handler anywhere along the call stack — where it’s most needed.

Consider a web server: you might want an exception handler in the part that handles the HTTP request, where you can return the appropriate HTTP error code and maybe a nice page to send back to the user. Instead of propagating some error deep in your page handler code up the call stack so it can be properly handled, you just throw the exception where it happens and catch it where you want to handle it. The language takes care of the rest!
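A sketch of that pattern (the types here are hypothetical stand-ins, not from any particular framework):

// minimal stand-ins for a web framework's request/response types
struct Request { string path; }
struct Response {
    int status = 200;
    void write(string content) { /* send to the client */ }
}

void dispatch(Request req, ref Response res) {
    // page handler code; an Exception may be thrown many calls deep
}

void handleRequest(Request req, ref Response res) {
    try {
        dispatch(req, res);
    } catch(Exception e) {
        res.status = 500;                      // map the failure to an HTTP error code
        res.write("Internal error: " ~ e.msg); // e.msg is the thrown message
    }
}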

Stack Unwinding

One of the bookkeeping tasks that the language has to deal with is unwinding the stack. If for instance you have structs on the stack with destructors, those destructors have to be called, or else your program integrity is compromised. Imagine if a reference-counted smart pointer didn’t decrement its reference when an exception is thrown. Or a mutex is left locked.

There’s also scope-guard statements which help properly design initialization/cleanup code without having to remember cleanup at the end of scopes, or in every spot where a return statement exists. Those must also be run when an exception is thrown.

nothrow functions

A nothrow function is one that cannot let Exceptions escape the function. That means you must handle all possible exceptions thrown inside your function, or inside any throwing function you call. A nothrow function's purpose is to inform the compiler that it can omit the cleanup code needed for exception throwing.

This allows the compiler to both output less code, and also gives the optimizer more possibilities to work with, making nothrow functions preferable to ones that throw.

Stack Unwinding for Errors

However, a nothrow function is still allowed to throw an Error. How does that work?

How it works is that the compiler still omits exception cleanup code, and the code that catches the Error is not allowed to continue the program. If it does continue, the program may be in an invalid state. You can think of the throw and catch of an Error as being like a plain goto instruction.

The following code example and output demonstrates how cleanup code is skipped:

// For example:
void foo() nothrow {
   throw new Error("catch me!");
}

void bar() nothrow {
   import core.stdc.stdio;
   scope(exit) printf("cleaning up...\n");
   foo();
}

void main() {
   bar();
}
object.Error@(0): catch me!
----------------
./onlineapp.d:3 nothrow void onlineapp.foo() [0x55db91086345]
./onlineapp.d:9 nothrow void onlineapp.bar() [0x55db91086350]
./onlineapp.d:13 _Dmain [0x55db9108636c]

It can be tempting to catch an Error and use it as a control-flow mechanism. For example, an out-of-bounds array access is a frequent error that you may want to just recover from. But the stack frames may not have been properly cleaned up, which means things like mutex unlocks or reference decrements didn't happen on the way up the stack.

In short, your program is in an undetermined state. Continuing execution risks damaging the data used by the program, or crashing the user’s application.

How to handle Errors

Don’t. The only exception (pun intended) is when you are testing code. And actually the language guarantees proper stack unwinding for assert errors thrown inside unittests and contracts.

As a rule of thumb, an Error is for programming errors (that is, conditions you expect to be enforced by the programmer are incorrect), and an Exception is for environment/user errors.

If you do catch an Error, the only proper action is to perform some possible final action (such as logging the error) and exiting the program. And make sure any final actions you perform can’t be thwarted by undetermined state.

Edit: More Pitfalls!

After much discussion on the D forum, one user (frame) noted that you can return from a scope(failure) statement.

I didn't go over exactly what a scope guard statement is, but essentially there are three conditions you can use to run cleanup code: exit, success, and failure. I used scope(exit) above to show an example of skipping cleanup code.

A scope(failure) statement executes when a function is exiting because a Throwable was thrown. But an Error is a derivative of Throwable, so this includes Error! Normally this isn't a problem, because after the statement is done, the code just rethrows the Throwable. However, you are allowed (per the spec) to simply return normally, use goto to exit the statement, or throw an Exception. Any of these mechanisms will mask the fact that an Error was thrown, and that the program is now in a possibly invalid state.
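Here's a minimal illustration of the pitfall:

void mightFail() {
    assert(false, "programming bug"); // throws an AssertError (an Error)
}

int tryIt() {
    scope(failure) return -1; // runs for *any* Throwable, including Error
    mightFail();              // the AssertError is swallowed here...
    return 0;
}

void main() {
    auto rc = tryIt(); // rc == -1, and the Error never propagates
}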

I recommend at this point NOT to use these mechanisms, and I have advocated on an existing dlang issue that the language revoke this allowance.

So what if you want to return a code if an Exception is thrown? Well, the compiler actually rewrites a scope(failure) statement like:

// scope(failure) <code>; // is rewritten as
try {
   ... // all code after the scope(failure) statement
} catch(Throwable _caught) {
   <code>
   throw _caught;
}

Instead, you can expand the statement and change the Throwable to an Exception to make sure you aren’t inadvertently masking an Error from propagating:

try {
   ... // normal function code
} catch(Exception) {
   return 10;
}

Golf Blitz is the Most Incredible Game that I Will Never Play Again

OK, so it is possible that I may at some point play it again. But this will require some drastic changes that I don’t see happening. Let me start off by saying that I was a beta tester for this game, downloaded it the first day it was available, and have been hooked for the last 5 months. It is one of the funnest and most entertaining games I’ve ever played. More recently it’s been feeling like a bad relationship where you love someone but there’s that one nagging thing that you keep trying to overcome but just can’t get past. It’s hard to break up with someone or something you love and have invested time in, but the frustration has been building up so much recently that I just can’t continue.

But before I continue my farewell letter, I need to acknowledge that Golf Blitz has some amazingly good qualities that make it one hell of a good game.

What is Golf Blitz?

Golf Blitz is a new game from Noodlecake Studios, Inc., makers of the famed Super Stickman Golf® series. Super Stickman Golf 3 may be my favorite all-time mobile game. My brother and I would (and mostly still do) play race matches against each other every day at lunchtime. The mechanics are fabulous, the levels interesting and well-designed, and the endless variety of hats, powerups, and game modes keep the game reasonably fresh. By this point, I have unlocked every single thing in SSG3 that is available, and generally don't play much except for the matches against my brother.

Golf Blitz takes one aspect of SSG3, the race mode, and adds a new twist — ball collisions. This, along with a new quick-paced shooting mechanic, has made for super-entertaining game play. In addition, because there was rampant cheating in SSG3, the folks at Noodlecake redesigned the entire physics engine so that the game is run both on your device and on the server simultaneously. If there ever is a discrepancy, the server is authoritative. Even if someone hacks her client, the server should prevent anyone from cheating.

Noodlecake has done a great job making a multiplayer game that can be played across continents that appears to be super-responsive and have no lag. Indeed, as long as you have a decent connection with at most 300ms lag, you will not notice any differences or lagging (but when you do have a crappy connection, be prepared for some weird stuff). Put simply, the online experience is incredibly tight.

If you want to get a good idea of gameplay, you can take a look at my youtube channel where I’ve uploaded some game highlights and different tournament matches from my team.

Climbing the Ladder

The main play mode for this game is to enter your golfer into a "sit-and-go" style match. Your golfer's abilities and self-worth are all measured using a zero-sum system of "trophies". Every match, you win or lose trophies to other players according to who finishes first, second, third, or fourth, and how the players ranked before they were matched. The matchmaking system tries to match people who are close in trophies, and does a decent job putting like-skilled players together. As you sit waiting for someone in your trophy range to enter, the window of trophy difference you can be matched against grows. At the time of this writing, the top players were in the 2600-2800 trophy range, and the amount you win or lose is roughly 16 trophies per person. This means in a 4-player match, you might win or lose up to 50 trophies, sometimes more. I won't go into all the details here, as there is a lot that can be analyzed, but needless to say, the game is all about getting trophies and climbing the leaderboard ladder.

In addition to trophies, you get packs. Each time you gain trophies in a match (1 trophy will do), you get a pack to unlock. Higher quality packs give you more, but also take longer to unlock. Which pack you earn is random. The packs contain bux (the in-game currency), cards for unlocking cosmetics, and cards for upgrading powerups. In essence, the way you progress in this game is to play ladder matches (matches where trophies are on the line). There are also “friendly” matches where you play people on your team, and nobody loses or gains trophies, but allows you to practice or construct your own tournament style games. These do not earn you any packs or other rewards.

Finally, every 12 hours you are allowed to earn a “star pack”. A star is earned every time you finish a hole in 1st place. You can earn up to 3 stars per match, and after you get 10 stars, your star pack can be opened. Star packs are worth quite a bit more than the standard pack, and they contain a different kind of currency — gems. The only way you will level up at a reasonable rate is to gain star packs. So your incentive to play at least 4 games a day is strong.

Skills

The main progression of this game is to upgrade your golfer’s “skills”, or stats. Every time you level up (either by unlocking cosmetic outfits or hats, or upgrading your powerups), you get to add one skill point to one of four categories. The four categories are as follows:

  • Cooldown: This is how long it takes for your golfer to be able to shoot again after your ball comes to a stop.
  • Power: How much power your ball has when you hit it. More power means more distance.
  • Speed: How fast your ball will travel. In essence, the physics engine is “sped up”, making your ball go faster than it normally would.
  • Accuracy: Higher accuracy means the spread of your possible shots gets narrower.

In addition to the 4 categories, each powerup ball also has skill categories just like the above (except cooldown), which get to be upgraded every time you level up a ball. The maximum upgrade level for each skill is 12 (well, there are 13 levels if you upgrade them all to 12, but that is well beyond the reach of anyone not dumping tons of actual cash into the game).

The most successful players use a similar strategy to upgrade their skills. The first category to upgrade is cooldown. This is because cooldown affects every shot, not just the regular shots. In addition, 2 balls coming to rest together usually reset around the same time. Due to the ball collisions, balls on the green must compete against each other to see who gets into the hole first. In this scenario, shorter cooldown is essential, as you will be able to blast the other ball away before he blasts you.

The second skill generally upgraded is either speed or power. Having fully upgraded power allows you to save strokes, flying over obstacles that take un-upgraded players 2 shots to overcome. Having fully upgraded speed means you can outrun your opponents, leaving them behind. Top players upgrade these in different ways, but around level 34, you see most people have fully upgraded cooldown, speed, and power.

What about Accuracy? Surely top players need to be accurate to win? We’ll get to that in a minute.

The Good Parts

I want to say that I did have quite a bit of fun playing this game. The powerups are awesome (grenade ball is my favorite), and the level design is superb. There are so many close matches, so many times where I would blast an opponent off, or I would get blasted off, and I find myself laughing and enjoying myself, even when I lost.

The team system is great. I was on one of the best teams in the game (PrestigeWrldWyd), and ran the team tournaments. We did this by playing friendlies in a bracket-style tournament format, and I have to say this was my favorite part of the game, even though it’s not an official game feature. We played best of 3 matches, and usually the person who won was the better player, and even when they weren’t, the player who did win did so with some amazing skills and clutch shots. The ball collisions and competitive nature make this game super-unpredictable, and very fun. You have to think on your toes when you get into unfamiliar situations, and make sure you have stock of which powerups should be used when. There are mind games to be played (If I putt now, he’ll just blast me. But if I putt short, knowing he’s going to try and push me over the hole, he’ll push me in).

As with most games or competitions, playing the game enough gives you a good idea of how you rank (i.e. how many trophies you should have). As long as you stay around that range, moving up or down 100 or so trophies seems satisfying. And for the most part, that described me. I would play for my star pack every day and finish roughly 10 or 20 trophies away from where I started. When I’d level up, a bump in my average trophies was expected.

But occasionally, I would lose every single match I played, and gaining a star pack meant losing 150 to 300 trophies that would take a few days to earn back. This was the most frustrating part. My blood would boil. I looked into phone insurance plans. I would finish my star pack and feel a sense of loss and disappointment that made it hard to justify the last 30 minutes of playing.

And it wouldn’t just be losses. It would be losses where I had outplayed my opponent for 2 holes, going up 2-0, only to lose the next 3 because the game decided to screw up some of my shots.

I started to dread earning star packs. I would take breaks from the game for a weekend. It did help, but eventually the same stuff would happen. And when it did, I could not help but thinking, “this isn’t working, it’s not fun, why am I wasting my time playing this game?”

The Biggest Mistake

Remember the accuracy skill? This is the source of all the problems with this game. I’m going to analyze this for the next few sections, because I don’t think many people realize just how bad this aspect of the game is, and it deserves a good pondering and thumping.

First of all, nobody in the top tier of players has upgraded accuracy significantly. I had to go down to the 41st player on the leaderboard to find someone who upgraded accuracy significantly beyond level 3, who didn’t already fully upgrade their other stats. If accuracy were a stat that gave players an advantage, then you would think that more top players would have accuracy upgraded. This in itself isn’t exactly proof, but it is a good indicator that accuracy doesn’t win you games.

You also don't get to actually see any advantage of accuracy from other players. If you play against another player with high accuracy, it looks no different than playing against someone with low accuracy. You don't get to see the other golfer's preview, so accuracy is invisible to opponents. For the most part, low accuracy can be overcome by maximizing the power on your shots. Due to the way accuracy is implemented, the stronger the shot, the more accurate the beginning part of the shot is. This means you don't need full accuracy to make it into small slots 90% of the time, or to hit small targets.

Let’s take some examples from the portal land course. Here is a shot that aims for a tiny portal on the wall, when done with a targeted shot, vs. a full power shot (this is with 8 power and 3 accuracy):

[Screenshots: Targeted Aim vs. Full Strength Aim]

Note how the full-strength aim negates almost all the ill effects of having low accuracy. It also turns out that aiming at full strength is generally better. It is a game of speed, after all, and the faster you get there, the better.

Here’s another example, also from portals. Here, I’m lobbing onto a small green porch, where if I miss the hole, I’ll have to wait for cooldown and then putt again, giving my opponents plenty of time to get in first. However, if I take a full shot against the ceiling, the accuracy is pretty much indistinguishable from the sniper ball (almost completely accurate) for the contact point, and can be faster to the hole to boot.

[Screenshots: lobbing (super-inaccurate) vs. blasting against the ceiling]

Every time you upgrade accuracy, it means another point cannot go into power or speed (cooldown is a given, everyone should always upgrade cooldown to 12 immediately). When you play against people with more power or speed than you, it is crystal clear that they have an advantage and are winning the match because of that. If you spend points on accuracy, you will not beat those people, even if they are inaccurate. An inaccurate shot that goes over a hump that you can’t clear will beat you every time. A slow accurate shot meandering down to the green just to watch the other guy putt in before your golfer shows up isn’t very satisfying. Yes, you can upgrade all 4 categories to maximum. But this will take either an insane amount of time (things get really expensive bux-wise as you level up), or actual dollars. And I’m talking hundreds or thousands. This just isn’t going to happen for me.

In addition, upgrading your accuracy shrinks your range towards the middle of the shot preview. This means that if you need power or height, increasing accuracy will actually reduce the chance that you will succeed on those shots. The famous example is from shipyard, the hole with the big pole you can hit over with enough power. With full accuracy, your chances of going over that pole are actually reduced.

When my shot doesn't go where I'm aiming, it does not feel like I missed because I didn't upgrade accuracy; it feels like the game is picking the other player to win. It feels like a slot machine instead of a skill-based game. Even when you make your shots and see the other player mess up, did they really mess up? Or did the game just pick you this time? Once you are in the upper tier of players, one missed shot means a loss. And when that is decided for you by the game, it feels like someone stole the match from you. This is not a good feeling. The answer has been "well, just upgrade accuracy instead." That basically means forfeiting wins against equally skilled players who have chosen speed or power over accuracy. Neither route is appealing. Imagine a sporting game where the refs would occasionally trip one of the players, and the means used to decide when to do it were not disclosed, just "totally random." How fair does that seem? What if the same player got tripped 3 times in a row (yes, random numbers can do that)?

What does “Fully Accurate” Look Like?

One of the biggest problems with accuracy is that upgrading it doesn't make enough of a difference! Because I was quitting the game, I spent 500 gems I had accumulated to respec my golfer for full accuracy. I took a series of before/after screenshots to show what 3 accuracy and 8 power looks like vs. 12 accuracy and 12 power (early in my post-beta career, I had spent 2 points on accuracy). Below are the results, along with an analysis of each situation.

3 Accuracy
12 Accuracy
Sniper ball (1 accuracy)

Above you can see the same hole (one of the most annoying holes of the game, IMO) with 3 different trajectories. The first is with 3 accuracy, which has you possibly hitting the second island. But add full accuracy, and you still can't guarantee a hit on the pink spot (which, honestly, aside from miracle swishes is the only way you win this hole). Only with sniper ball accuracy (and this is with NO upgrades to sniper ball accuracy) can you hit the target you need.

3 Accuracy
12 Accuracy

Above is the same shot from portal land into the small portal, but I've added the 12 accuracy trajectory into the mix. Is this worth the 9 extra skill points? You will make the small portal about 90% of the time with 3 accuracy. But having less speed than your similarly leveled opponents will cost you.

Risky Shot
Sure-fire Shot

Surely accuracy helps with the short-porch green, right? Wrong. You can see above that even with full accuracy, you cannot guarantee you will hit your target the way you expect, especially when lobbing. In fact, I took the shot shown, and it bounced over the hole onto the other side. The second shot I took, off the ceiling, went in exactly as expected, because I can predict the trajectory exactly (the accuracy of the early part of the shot is way, way better, plus a higher angle of attack increases accuracy).

Pipe Dream

I'm going to throw this out there, even though I fully expect it to be ignored by the developers: there is one sure-fire way to fix the accuracy problem, and that is to replace accuracy with something else. I have always suggested that a better spec would be shot-preview distance. In SSG3, you had the red toadstool hat, which showed a short preview, and the green toadstool hat, which showed a longer one. This kind of mechanism would not take away from the skills of the players, would not let the game decide who wins and loses, and would provide a useful, tangible, and obvious thing to upgrade. Some people may be really good at following a short rendering of a ballistic trajectory to its destination, but most people would see the benefit of the sniper ball (which should have a nearly full shot preview).

As I said, I don’t expect this to happen. It would be such a drastic change to the game, that I don’t expect them to even entertain doing this. But it’s good as a thought exercise to get people considering alternatives to fix this problem. If this dream came true, I would be back playing this game tomorrow.

A further advantage is that people would start paying attention to this upgradable stat, instead of ignoring it completely as they do today.

Reasonable Improvements

What about some tweaks? I can think of a few that might help. First, the shot selection within the preview is supposedly a uniform distribution. Put simply, you are just as likely to get the weakest shot in the preview as the one dead center. Noodlecake could change this to a distribution weighted toward the center; in other words, make it more likely that you shoot where you are aiming than not. I don't expect this to fix the problems with accuracy, or to make it more desirable to upgrade, but at least it would lessen the "Black Magic F**kery" (as one of my teammates puts it) that this game inflicts on you. Maybe that would be enough to lessen the frustration, I don't know.

Another possibility is to improve the improvements: as you upgrade accuracy, tweak the spread so the difference is more satisfying. When you spend a skill point that could otherwise have gone to power or speed, it's disheartening to see that it didn't make much of a difference.

It’s Been Fun

Well, for the most part anyway. I've enjoyed the community, and Noodlecake has been awesomely responsive to problems and feedback. I can't say enough about my teammates; they have been the only thing keeping me in this game for so long. PWW forever! I hope Noodlecake succeeds, and I know they have a new mode coming out (challenge mode). It will not draw me back, as I expect the main mechanism for progressing will still be the ladder matches, but who knows? I wish I could say that the last 5 months have been more fun than stress, but really, the day I decided I was not going to play this game again was one of the most refreshing days I've had in a while. Can't argue with that feeling, regardless of the reasoning.

How to Report a Bug to Microsoft

So you found a bug in one of Microsoft's great works of software art. What do you do now? I realized that there isn't a nice step-by-step guide online, so I will give you the low-down on how this process works. It's not straightforward: Microsoft is a large company and probably fields thousands of calls or reports each day from people who don't know how to open the File menu. Unfortunately, for those of us in the software development industry, there isn't a quick or easy way to report actual bugs.

The Bug

The bug I found has to do with Excel 2016. At my company, we have many spreadsheets that use a feature in Excel called "Web Queries". These allow one to download a web page, or a table on the web page, into cells in your Excel document.

In my particular case, I use this feature to connect spreadsheets used for calculating pricing and energy savings to our internal job tracking system that I developed (our company makes energy-saving updates to refrigeration systems), and to upload the results back to the tracking system.

In Excel 2010 and Excel 2013, this works well. However, I recently discovered that in Excel 2016 (at least in Office 365 version 16.0.8067.2115) the web query downloads the entire web page, not just the selected table. This means lots of unexpected extra data comes into the spreadsheet, overwriting other cells and causing many things not to work.

Step 1: Search for Existing Reports, Reduce Use Case

The Internet is great for sharing the misery of issues with applications that have known limitations. Search for any help on Excel and there is usually a boatload of hits on Google. In this case, however, I found surprisingly little online: maybe a few reports that could be related, but those people had worked around the problem with VBA macros and other tricks. I don't have that luxury, since the sheer number of files I would have to update makes this very time consuming.

Any good software engineer knows that before reporting a bug to the organization that owns the software, the most helpful thing you can do is reduce it to a minimal case. In my case this was easy: I created an HTML page with some text before and after a single table:

Using Excel 2016, I created a new document and added the web query. The way this is done is to select the "From Web" button on the Data ribbon:

This will bring up a web browser (it’s actually Internet Explorer), allowing you to select both the page, and any sub-table you wish. Here I’ve pointed at the aforementioned web page, and selected the single table for import:

Finally, click the Import button at the bottom, and you get the error I was talking about:

The correct result should be to import only the table data, and not the “before” or “after” text. Here is how it looks on Office 2010:

Now that we have a very reduced, and obvious bug, we can proceed to the next level, which is contacting Microsoft directly.

Level 1: Microsoft Chatbot

Microsoft has a chat bot that will try to help you with answers to questions using some advanced AI. It doesn’t take long to defeat this foe, as this is your standard search of the knowledge base, and you can simply say “No, this doesn’t answer my question”. Eventually, it will attach you to a live agent in the chat window.

Note to Reader: I did not capture any of these sessions as screenshots or logs, unfortunately, so I will describe as best I can the interactions between me and the support staff.

Level 2: Installation Support

Regardless of what you have told the chatbot, or how technical you sound, the first level of Live human support is an installation technician. These are the people who can service the “I can’t find the File menu” support requests. Understandably, these people have much automation at their disposal, and training in how to deal with these kinds of issues. You will get responses to your questions such as “I understand that you are having problems with your web queries in Excel. I’m so sorry for your experience, I’ll be asking you a series of questions to assist you in fixing this issue” and “I understand that you think this is a bug in Excel. I’m so sorry you feel that way, I’ll ask you a few more questions to assist you in fixing this issue.”

The chinks in the armor of these combatants are plain to see — they have buttons to click that repeat what you say, and eventually show the nature of their game. After a few button clicks and repetition, you get something along the lines of “Your problem needs to be handled by the next level of support. Unfortunately, this cannot be done over a chat session. The best option is to call …” and so on. Congratulations, you have graduated to the next world of the Microsoft Support Infrastructure! Here is where you now need to come out of that computer and into the real world with real people.

Level 3: Office 365 Installation Technician

After being on hold for about 5 minutes, you finally get to show them that you actually have technical experience, and explain the real issue to a living, breathing person not augmented with automated responses. The technician in this case is not really trained on the underlying technology of Excel or any other Microsoft product. Finding the File menu is just about all they know. It doesn't take long to KO this Glass Joe and move on to the next level. What you are looking for here is "I'm not trained in this area, so I'm going to escalate you to Level 2 technical support." Note that at Microsoft, even though we are on Level 3, the technicians are trained to attempt deception at every turn. Saying you were really only on Level 1 is a clever use of psychology to try and dampen your spirits. But cheer up, we will prevail!

Level 4: “Level 2” Technician

At this point, you are on hold, and your phone call is stretching past 30 minutes. This is another technique employed by the technicians. Usually you will hear some great news about Windows 10 on the hold line, “now that it is released”, and some of the amazing things you can accomplish. The message here is clear “Microsoft is huge, and you are just one lowly Office 365 licensee. You really should hang up if you know what’s good for you.” I know you have things to do, and it is tempting to give up, but patience is a virtue, and Windows 10 is pretty amazing ((When compared to Windows Vista or Windows 8)), so don’t give up yet!

When you finally achieve a connection to the L2T, you have the ear of a technical person. "Finally," you think, "someone who will understand this issue." This person will use a web applet to actually take over your Windows computer and let you show him exactly the issue. You use your minimal test case to prove beyond a doubt that the issue is definitely caused by Office 2016.

The truth is, this person does understand your issue, but his job is not to help you report it. His job is to find loopholes: loopholes that allow Microsoft to defeat you and send you back to the File menu-finding engineers.

L2T: Can I have your Microsoft ID?

Me: Yes, it’s redacted@redacted.com

L2T: Let me look up your information. It shows here you have Office 365 Home. We are actually a support office for Commercial users of our product. We have a special division of support that can help you with your Office 365 issues. Let me connect you.

Level 5: The Deflection Specialist

At this point, you have been on the phone for about 45 minutes, and the impatience is growing inside you. You have suffered your first "defeat", but really it was a victory. Remember, Microsoft's deception training is prevalent in all of their support staff. Even though it seems you have been shoved back to the File-menu engineers, you have actually been passed to a special department of Office 365 support called the "Deflection Department". This department's job is basically to lie: they make up any excuse or technical jargon to try and get you off the phone.

After you explain your bug ((Note: this isn’t verbatim, but I kid you not, this was the gist of our conversation)):

Deflection Specialist: OK, so you have a file that works with 2010, but not in 2016? Was this file created in Office 2010?

Me: Well, actually it’s probably older than that…

DS: Oh, sir, unfortunately, the software was completely rewritten in 2013, so files created before that version aren’t compatible with later versions.

Me: OK, so what is the upgrade path? How do I make the file work for 2016?

DS: You can’t, you need to recreate the file in 2016.

Me: But I have a brand new file I created with 2016, and it has the same issue.

DS: How many versions of office do you have installed on this computer?

Me: Well, I just installed 2016, but 2010 still is on there.

DS: That is probably the issue, it isn’t supported to run more than one version of Excel on the same system.

Me: But other users who only have 2016 also have this issue.

DS: By the way I wanted to let you know, this issue you are having is being widely reported right now, and our team is working on it. I suggest you wait 48 hours and see if any updates come out to fix the issue.

Me: OK, so you are telling me that many people are calling in to Microsoft to report this Web Query Issue?

DS: Do you know the exact version of office you are running? Here, let me log into your computer to see.

[At this point, the Deflection Specialist logs into my computer by having me download the same app that the Level 2 Technician had me download.]

DS: Yes, see, you have both office 2010 and 2016 installed, let me remove that.

At this point, she removes Office 2010. Note that I am actually running Windows 10 in a VM on my Mac, and I created a snapshot before installing 2016 to work on this problem, so I'm not objecting at all to her removing the only working version of Excel from my system. It is wise to have a backup plan for when you start this whole bug-reporting process.

The secret to defeating the Deflection Specialist is to keep talking. Keep asking questions, keep insisting that your problem is not solved and that she needs to pay attention to the obvious bug. At this point, your call is over an hour long, and irritation is noticeable in your voice. This is a good thing; she can sense it, and will then say:

DS: I’m actually not trained on this part of Excel, so I’m going to send you to a Level 2 technician…

Me [Interrupting]: But is it a Commercial Level 2 technician? Because I’ve already spent over 1 hour on the phone, and I’ve talked to Office 365 technicians and Level 2 commercial technicians, and they keep sending me back and forth.

DS: No sir, I assure you, this is a department that specializes in Office 365 issues, and will be able to solve your problem.

After the final lie, the DS has given up, and sends you to the next level.

Level 6: “Level 2” Technician, World Circuit

This enemy is just like the original Level 2, except she cuts right to the chase. You don't get to her unless you have an actual, provable bug in the product, so she doesn't waste any time going through the motions. She still asks for the details of your bug, but then immediately asks for your authorization:

L2T: Do you have a case number?

Me: I probably should by now, I’ve been on the phone for 70 minutes, and have been passed back and forth between level 1 and level 2 support, each claiming that they can’t help me.

L2T: So sorry to hear that. Are you using Office 365 Home edition, or Business?

Me: I'm pretty sure it doesn't matter.

L2T: I need to know which version, as we service commercial…

Me: The last person I spoke with said that you would be someone that was able to understand and fix this issue. Is there anyone at Microsoft that understands Office?

L2T: Why don’t you give me your Live ID and I can look up your product version.

[A few minutes later]

Me: I’ve been sent to them THREE TIMES, and they keep sending me back to your department, I don’t think they can help me.

L2T: Sorry sir, we only service Commercial customers here, and they will be able to help you with this…

Me [Interrupting]: You can’t keep sending me over there, I want to make sure that this person knows how to deal with this bug.

L2T: I will first conference this person in, and make sure they understand the issue before letting you go, is that OK?

Me: OK, we can try that.

If you have made it this far, congratulations! Your veins may be about to burst out of your forehead, and your polite manner has probably disintegrated into terse annoyance, but the paydirt is about to come. You will now face the final challenge in the Microsoft Support Infrastructure…

Level 7: The Link Engineer

Not going to sugar-coat this, this is the final stop in your phone call. I’ll just tell it like it happened:

L2T: I have confirmed that the technician understands your issue, and will be able to help, so are you OK with me dropping off the line?

Me: Yes, thank you. [L2T hangs up]

Link Engineer: Hi, can you please describe your problem

[Repeat same description]

LE: Thank you for telling me that. Unfortunately, our office is not trained to handle these types of questions, so I’m going to give you a link to go to…

Me: Are you freaking kidding me? Does ANYONE at Microsoft know how Office even freaking works???!!! I’ve been on the phone for 80 MINUTES, and you are going to send me to a Knowledge Base article???!

LE: Sorry sir, but if you go to answers.microsoft.com…

Me: I just want to report this bug! It’s a bug, confirmed it is in Office 2016!!! Do you care about having bugs fixed in your product?!! Some answers page isn’t going to help me! I’m sick of this, you have no clue what you are talking about.

Now, hang up the phone. Don’t just hang it up. Slam it down so they know you are pissed.

Final Level: Write an Angry Blog Post

Yep, that’s it. Write one just like this one. If there’s anything Microsoft or large companies like them understand, it’s that bad PR on the Internet, especially when written in a sassy, clever fashion with references to old video games, will get people’s attention. This will persuade them to immediately start working on the bug “reported”, and it should be fixed by the next version ((disclaimer: I have no idea if this is going to work, but it really should)).

Some questions you may have, and I’ll answer them:

1. Do I have to go through the entire Microsoft Support Infrastructure gauntlet to write an angry post about my bug?

Yes you do. If you don’t have a long horrible story about phone support, it’s not as interesting, and will not gain any attention.

2. Did you know that newer versions of Excel have “Power Query” capability, and that the “Web Query” feature is pretty much obsolete?

Yes, I know that. The Web Query feature is obviously meant to be supported in 2016, as it’s in the UI. And Microsoft is famous for not breaking backwards compatibility. I also have several hundred spreadsheets that would have to be updated. Not going to do it.

3. Why do you have hundreds of spreadsheets? Why not just merge them into one maintainable spreadsheet where you could fix the problem in one place?

Because shut up.

4. What if I have paid for support from Microsoft?

Then you can report my bug for me directly. Why haven’t you? It’s really obvious and straightforward. Please?

A Final Note: I know that there are some good people working at Microsoft (with the exception of the DS), and I took some creative license in the snark describing this phone support. I really do hope they find and fix the issue, and don’t mind this ribbing.

UPDATE: Since this has been posted on HackerNews and reddit, some very nice folks at Microsoft have chimed in with some helpful tips (and all of them polite). First, I want to thank one in particular who reproduced my issue and filed a bug in their internal system. So yes, it works!

Second, here is a collection of ways that are probably better than the approach I took:

  • Use the feedback button in your Office 2016 product (I hadn't seen this before, but it looks like a way to submit a pretty detailed bug report).
  • Use office365.uservoice.com, where Microsoft has many places to suggest features. The news from the insiders is that the developers patrol those regularly.
  • Even though I was in a rotten mood and hung up, the last technician (the Link Engineer) was trying to send me to answers.microsoft.com. Apparently, this is also a recommended way to get in touch with developers, though I have to say, from my experience seeing some of the conversations on there, the answers aren't always that good.
  • If you are working with an open-source Microsoft product, try to find it on GitHub and submit an actual bug there (or, even better, submit a pull request to fix it).

Thank you everyone with tips, it was actually very good to see all the genuine sympathy and offers of help, especially from MS employees. Cheers!

Have your Voldemort types, and keep your disk space too!

A recent issue I discovered (and one that has no doubt been encountered before) is that using Voldemort types in D can result in insane symbol bloat. However, a presentation by Vladimir Panteleev at DConf 2016 gave me an idea that helps solve the problem: you can still create a Voldemort-style type, but cut out most of the template bloat that can impede your project.

Voldemort Wrappers

Voldemort wrappers are a way to create chain-constructed types: types that wrap one type in another, where construction of the wrapper is done via an Implicit Function Template Instantiation (IFTI) factory function. The type itself is defined inside the function, and so cannot be named by any external entity (hence the term Voldemort). This is very nice encapsulation, because the type doesn't interfere with any other symbols, and all creation of the type is funneled through the approved factory function.

An example of a Voldemort wrapper is the chain function from Phobos. chain takes 2 ranges with the same element type and makes a range that traverses the first, then the second, as if they were one range (for more info on ranges, I recommend reading Ali Çehreli's chapter on the subject). The full chain function gives us lots of niceties, such as implementing all the features common to the two ranges. For demonstration purposes, however, we will write an inputChain function that only works on like-typed input ranges:
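
Here is a minimal sketch of what such a function can look like (the complete code for the article is in the Gist linked at the end):

// simplechain.d
import std.range.primitives;

auto inputChain(R1, R2)(R1 r1, R2 r2)
    if (isInputRange!R1 && isInputRange!R2 &&
        is(ElementType!R1 == ElementType!R2))
{
    // The Voldemort type: declared inside the function, so no
    // outside code can name it.
    struct Chain
    {
        R1 source1;
        R2 source2;
        auto front() { return source1.empty ? source2.front : source1.front; }
        bool empty() { return source1.empty && source2.empty; }
        void popFront()
        {
            if (source1.empty) source2.popFront();
            else source1.popFront();
        }
    }
    return Chain(r1, r2);
}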

Now, we can write a simple test that chains together ranges without any allocation!
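
Something like this (a sketch; the file and module names are illustrative):

// testchain.d
import simplechain;
import std.stdio : write, writeln;

void main()
{
    // No allocation here: Chain just stores the two string slices.
    auto ch = inputChain("hello, ", "world!");
    foreach (c; ch) // iterates via empty/front/popFront
        write(c);
    writeln();
}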

And the result:

$ ./testchain
hello, world!

This is all pretty straightforward stuff, and isn’t groundbreaking. But what is hidden from you here is the alarming space-cost for Voldemort wrapper types.

Exponential Symbols

Let's print out the name of the nameless type (yes, it still has a name, even though you can't access it). This is a bit tricky, because simply printing typeof(ch).stringof results in the name Chain. That isn't what we want; we want the fully qualified and instantiated type name. The easiest way to get it is to create an exception, so the type name shows up in the stack trace:
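
Something like this (a sketch, assuming simplechain2.d is a copy of the earlier module with front modified to throw):

// inside Chain, in simplechain2.d:
auto front()
{
    throw new Exception("check the stack trace for my type name");
    // Unreachable, but keeps the element type the same:
    return source1.empty ? source2.front : source1.front;
}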

The result of running this with our previous main file is a stack trace that starts with:

$ ./testchain
object.Exception@simplechain2.d(13)
----------------
4   testchain                           0x00000001063efa24 pure @safe dchar simplechain.inputChain!(immutable(char)[], immutable(char)[]).inputChain(immutable(char)[], immutable(char)[]).Chain.front() + 144
...

Here is the Chain type in a “nicer” format (I have replaced immutable(char)[] with the more commonly known alias string):

simplechain.inputChain!(string, string).inputChain(string, string).Chain

Here we can see that the type of ch isn't just Chain; it contains the full signature of the function Chain comes from ((Note that the mangled symbol name (the one actually stored in the object file) reflects all of these pieces. I'm using exceptions to print out the name because they are easier to read and understand, but the same problem exists with mangled names as well.)). The reason you see inputChain twice is that inputChain is a template function: there are two symbols, one for the template (denoted by the instantiation symbol '!'), and one for the function itself, which we will cover later. While this in itself isn't extremely troubling (and actually makes a lot of sense), the trouble becomes apparent when you try to chain 3 strings together (using UFCS):
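
A sketch of such a test:

// testchain2.d
import simplechain2;
import std.stdio : write, writeln;

void main()
{
    // UFCS: the second inputChain call receives the first Chain as
    // its first argument.
    auto ch = inputChain("hello", ", ").inputChain("world!");
    foreach (c; ch)
        write(c);
    writeln();
}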

Compiling and getting the exception, the type of ch is now:

simplechain.inputChain!(
    simplechain.inputChain!(string, string).inputChain(string, string).Chain,
    string)
 .inputChain(
    simplechain.inputChain!(string, string).inputChain(string, string).Chain,
    string)
.Chain

I've used the indentation to show you the pieces of this. First, we have the template, which takes two parameters (two different range types, in fact). The first template parameter is the resulting type of the first inputChain call (you should recognize this from before). Note that it contains not only the template instantiation, but the full signature of the function call as well. The second parameter is simply another string. Then the function parameters repeat all of the same information.

If you continue this pattern, perhaps tacking more inputChain calls onto the end (as one would do with range pipelines in Phobos), you can see how this gets progressively worse: the first argument to each call is a recursive expansion of every previous call. I believe the growth of the symbol name is on the order of O(2^n), meaning we have exponential growth. For name mangling, the expansion is actually O(3^n), because the return type of each level of function (not shown here) is included as well.

Abandoning the Dark Lord

So with such growth, a small range pipeline of Voldemort wrappers can add up to megabyte-long symbol names. But notice that the type itself depends only on the template parameters, not the function parameters ((In D, there is such a thing as a nested struct. Such a struct can utilize the stack frame of the function itself, giving access to variables and other definitions inside the function.)).

We can solve the problem by moving the struct outside the function itself, to be included in the module namespace. Make this a private struct, and repeat all the template paraphernalia, and we have a “solution”:
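
A sketch of that approach:

// Chain is now a private module-level struct. Note that all of the
// template parameters and constraints must be repeated.
import std.range.primitives;

private struct Chain(R1, R2)
    if (isInputRange!R1 && isInputRange!R2 &&
        is(ElementType!R1 == ElementType!R2))
{
    R1 source1;
    R2 source2;
    auto front() { return source1.empty ? source2.front : source1.front; }
    bool empty() { return source1.empty && source2.empty; }
    void popFront()
    {
        if (source1.empty) source2.popFront();
        else source1.popFront();
    }
}

auto inputChain(R1, R2)(R1 r1, R2 r2)
{
    return Chain!(R1, R2)(r1, r2);
}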

And the resulting type:

simplechain.Chain!(simplechain.Chain!(string, string).Chain, string).Chain

Not too bad as a name, and this solves the exponential growth. But we have lost all the niceties that make Voldemort types so attractive — avoiding namespace pollution, avoiding repeating template specification, and encapsulation. This solution leaves a lot to be desired.

Using eponymous templates

So let's look at a better way, one that allows us to keep the benefits of Voldemort types without the baggage. In D, all templated functions, enums, types, etc. are actually short forms of a special kind of template called an eponymous template. When you compile inputChain, the compiler really treats it as something that looks like this:
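
Roughly like this (a sketch, constraints elided for brevity):

template inputChain(R1, R2)
{
    // The eponymous member: a function with the same name as the
    // template itself.
    auto inputChain(R1 r1, R2 r2)
    {
        struct Chain
        {
            R1 source1;
            R2 source2;
            auto front() { return source1.empty ? source2.front : source1.front; }
            bool empty() { return source1.empty && source2.empty; }
            void popFront()
            {
                if (source1.empty) source2.popFront();
                else source1.popFront();
            }
        }
        return Chain(r1, r2);
    }
}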

An eponymous template function still works with IFTI, so it’s equivalent to the original. However, now we have access to a namespace that we didn’t have before — the space inside the template, but outside the function itself. As shown by Vladimir Panteleev’s DConf 2016 talk, access to this space is forbidden by the compiler to outside functions and types because it always resolves to the eponymously named member.

So let’s put our struct there:
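
A sketch:

template inputChain(R1, R2)
{
    // Chain lives inside the template but outside the function.
    // Outside code still cannot reach it, because the template
    // always resolves to the eponymous member below.
    struct Chain
    {
        R1 source1;
        R2 source2;
        auto front() { return source1.empty ? source2.front : source1.front; }
        bool empty() { return source1.empty && source2.empty; }
        void popFront()
        {
            if (source1.empty) source2.popFront();
            else source1.popFront();
        }
    }

    auto inputChain(R1 r1, R2 r2)
    {
        return Chain(r1, r2);
    }
}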

And the resulting type:

simplechain.inputChain!(simplechain.inputChain!(string, string).Chain, string).Chain

Note that the Chain type is safely buried inside the template namespace, without providing access to any outside callers. If you used the above type name, you would get a compiler error.

I call this the Horcrux ((If you don't get this, then you need to read more Harry Potter)) method. Compared to Voldemort types, it's pretty much on par feature-for-feature, except that Horcrux wrappers do not have access to the function call stack or to any definitions inside the function (unless you move those into Horcrux space as well), and the declaration is a little clunky. There are some advantages, though. For example, overloaded functions that return the same type can live in the same template and share the type externally, making them even less repetitive than the equivalent Voldemorts. You can also put unit tests inside the template, and they will have access to the struct directly.

There is some effort to fix the compiler to avoid creating such huge symbols, but until this happens, I will be splitting my functions Horcrux style.


Here is the Github Gist with all the code included in the article.

Import Changes in D 2.071 [Updated]

Note: This post has been updated on 8/29/2016 with new information on mixin template imports.

In the upcoming version of D, several changes have been made to the import system, including fixes for 2 of the oldest bugs in D history.

There's bound to be a lot of confusion about this, so I wrote this post to explain the rules and the reasoning behind some of the changes. I'll also explain how you can mitigate any issues in your code base.

Bugs 313 and 314

Links: 313 and 314

Description

Private imports are not supposed to infiltrate the modules that import them. If you import a, and a imports b privately, then you should not have any access to b's symbols. However, before this was fixed, you could access b's symbols via the Fully Qualified Name (FQN). The FQN is where you list all packages, including subpackages, separated by dots, to access a symbol: for example, std.stdio.writeln.

In addition, when importing a module using static, renamed, or selective imports, the imported symbols were incorrectly made public to importing modules.

An example:
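
A minimal sketch of such an example (using the module names referenced below):

// ex1_a.d
module ex1_a;
private import std.stdio; // private is the default; shown for emphasis

// ex1_main.d
module ex1_main;
import ex1_a;

void main()
{
    // std.stdio was never imported here, yet the FQN resolves
    // through ex1_a's private import.
    std.stdio.writeln("hello");
}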

With 2.070 and prior versions, compiling this works just fine. With 2.071 and above, you will get either a deprecation warning or an error.

Note that the private qualifier is only for illustration. This is the default import protection for any imports.

For an example of how selective imports add public symbols:
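
A sketch of such an example:

// ex2_a.d
module ex2_a;
import core.stdc.stdio : printf; // selective import

// ex2_main.d
module ex2_main;
import ex2_a;

void main()
{
    // printf was selectively imported only inside ex2_a, yet it is
    // visible here as though ex2_a had declared it publicly.
    printf("hello\n");
}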

With 2.070, this compiled just fine. However, printf is supposed to be a private symbol of module ex2_a. With 2.071 and above, this will trigger a deprecation warning. In the future, the code will trigger an error.

Selective imports and FQN

A combination of both 313 and 314 occurs when you use a selective import and expect the Fully Qualified Name to also be imported. This is not what a selective import was supposed to do; it was only supposed to add the symbols requested.

An example:
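
A sketch (the module name ex3 is illustrative):

// ex3.d
module ex3;
import std.range; // needed so the package name 'std' exists here at all

void main()
{
    import std.stdio : write;
    write("hello, ");            // fine: selectively imported
    std.stdio.writeln("world!"); // deprecated: only write was imported,
                                 // so the FQN should not resolve
}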

In this example, std.stdio.writeln is not actually supposed to be imported, only write is supposed to be imported (and even the FQN std.stdio.write isn’t imported!). We have to import std.range, because otherwise this would not compile (ironically, the package std is not imported by the selective import unless there is another import of the FQN).

In 2.070, this produces no warning or error. In 2.071 and beyond, this will produce a deprecation warning, and eventually an error.

Fixing problematic code

In order to fix such code, you have to decide what was intended. If your code really was supposed to publicly import the symbols, prepend public to the import statement. This brings all the symbols imported into the namespace of the module, so any importing module also sees those symbols. In our example 2 above, this would mean adding public to the import statement in ex2_a.d

If the imported module was not supposed to publicly expose the symbols, then you need to fix all importing modules with this problem. In our example, this would mean adding import core.stdc.stdio; to the top of ex2_main.d.

In the case of accidentally exposing the FQN of symbols that were privately imported, this is typically an issue with the importing module, not the imported one. In this case, you need to add an import. In our example 1 case, this would mean adding an import for ex1_a module to ex1_main.d.

For example 3, you can achieve the original behavior by both selectively importing the symbol and statically importing the module. Just add static import std.stdio; to your scoped imports. Alternatively, add writeln to the selectively imported symbols and use the unqualified name instead of the FQN.
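
Summarizing the fixes as code (a sketch):

// Example 2, if public exposure was intended (in ex2_a.d):
public import core.stdc.stdio : printf;

// Example 2, if it was not intended (at the top of ex2_main.d):
import core.stdc.stdio;

// Example 3, keeping the FQN usable alongside the selective import:
import std.stdio : write;
static import std.stdio;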

For an example of how Phobos was fixed for this problem (every build was producing thousands of deprecation messages), see the PR I created.

Bug 10378

Links: 10378 Pull Request

Description

Another import-related bug fix prevents unintentional hijacking of symbols by a scoped import. Scoped imports are not at module level; they appear inside a function or other scope, and are only valid within that scope. Prior to 2.071, such an import was equivalent to importing every symbol of the imported module into the namespace of that scope, overriding any other symbol in that namespace, including local variables in outer scopes. An example:
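
A sketch of the example described below:

// ex4_a.d
module ex4_a;
enum foo = 42;

// ex4_main.d
module ex4_main;

void main()
{
    int foo = 1;
    {
        import ex4_a; // non-selective scoped import
        assert(foo == 1); // 2.070: ex4_a.foo hijacks foo, assert fails;
                          // 2.071+: the local foo has precedence
    }
}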

In 2.070 and prior, the assert above used ex4_a‘s definition of foo, not the local variable. In 2.071 and beyond, the local foo has precedence. The precedence rules work like this:

  1. Any local symbols are examined first. This includes selective imports which are aliased into the local scope.
  2. Any module-level symbols are examined.
  3. Any symbols imported are examined, starting with the most derived scope imports, all the way to module-level imports.

Note that this may be a breaking change, as demonstrated by the example.

Why did we change this?

This was changed because any symbol added to an imported module could drastically affect code using non-selective scoped imports, hijacking a symbol in ways the author cannot predict. While there is still potential for hijacking, since scoped imports override module-level (or higher-level scoped) imports, at least symbols that are locally defined are not affected. These are the symbols under the direct control of the module's author, and they should always have precedence.

A common change to a module is to move imports inside the scope of the functions or types that are the only users of that import. This helps avoid namespace pollution. However, given that local module functions had precedence over module-level imports, while scoped imports would take that precedence away, this move did not always preserve the behavior the user intended. For this reason, module functions now always have precedence over non-selective scoped imports.

Fixing problematic code

This one is a little more nuanced. It may be that you wished to override the local symbols! In that case, use a selective import. Selective imports alias the selected symbols into the local scope, overriding any other symbols defined at that point. In our example, if we expected foo to refer to ex4_a.foo, then we would use an import like this: import ex4_a : foo;

In addition, you can use the FQN instead of using the simple name. I would recommend using a static or renamed import in that case.

Imports from mixin templates ((Thanks to captaindet for bringing this issue to my attention))

Links: Forum discussion, issue 15925

Description

A somewhat controversial change in 2.071 is the effect mixins can have on imports. If you have a mixin template that imports a module, and you then use that template within a class or struct, the import is only considered while inside the mixin template; it is not considered inside the class or struct. For example:
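
A sketch of the situation (the names are illustrative):

mixin template Helper()
{
    import std.conv;

    // Fine in all versions: the import is in effect inside the
    // mixin template itself.
    string fromMixin(int x) { return to!string(x); }
}

struct S
{
    mixin Helper;

    string fromStruct(int x)
    {
        // Compiled in 2.070, because the mixed-in import was also
        // considered at the struct level. An error in 2.071+.
        return to!string(x);
    }
}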

The previous version would allow the import to be considered where the mixin occurs. In order to have a mixin template add an imported symbol, you can selectively import the symbol. In this case, static import will not work:
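
A sketch of the fix:

mixin template Helper()
{
    // The selective import aliases 'to' into the template scope,
    // and that alias is mixed into the struct:
    import std.conv : to;
    // static import std.conv; // would NOT make 'to' visible in S
}

struct S
{
    mixin Helper;
    string fromStruct(int x) { return to!string(x); } // OK again
}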

Why did we change this?

The explanation seems to be that allowing such imports can create a form of hijacking. Since a class-level import overrides a module-level import, a user may not realize that the mixed-in import is present and is overriding a module-level import in the local module. The hijacking can even happen after the fact, via a change to the imported module, without the user's knowledge or any changes to their code.

Fixing problematic code

There isn’t a very easy way to rectify this problem. The only solution is to selectively import all the symbols you may need from that other module within the mixin.

Transitional Switches

The new version of the compiler comes with two new transitional switches that you can use to find or ignore these errors (note that these affect the mixin template imports as well):

-transition=checkimports: This switch warns you if you have code that behaved differently prior to issue 10378 being fixed. Note that it may slow down compilation noticeably, hence it's not the default ((Thanks to Dicebot for pointing this out))

-transition=import: This switch reverts behavior back to the import rules prior to 10378 being fixed. Only use this switch as a stop-gap measure until you can fix the code!
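
Usage looks like this (the file name is illustrative):

$ dmd -transition=checkimports app.d   # warn about changed lookups
$ dmd -transition=import app.d         # old import rules (stop-gap only)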

General Recommendations

Because importing external modules that are outside your control can lead to hijacking, I recommend never using a scoped import that isn't selective, static, or renamed. This gives you full control over what invades your namespace. The compiler now protects you a little better, but it's always best to defend against namespace pollution from a module you don't control.

References

  • D programming language
  • D Import Spec
  • Issue 313
  • Issue 314
  • Issue 10378
  • Issue 15925
  • D Compiler Download