y = @(x) (x+1).*(0<x & x < 1) + (2*x+1).*(1<=x & x<2);
This will work for vector and array x as well. However, this version of the function returns 0 for locations outside the defined range instead of returning NaN. If you want it to return NaN for out-of-range locations, then:
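As a quick illustration of the out-of-range behavior, evaluating on a vector that includes a point outside (0, 2):

```matlab
y = @(x) (x+1).*(0<x & x < 1) + (2*x+1).*(1<=x & x<2);
y([0.5 1.5 3])
% ans = [1.5  4  0]   -- x = 3 is outside the defined range, so it maps to 0
```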
y = @(x) (x+1).*(0<x & x < 1) + (2*x+1).*(1<=x & x<2) + (x > 0 & x < 2)./(x > 0 & x < 2) - 1;
How this works is, admittedly, not obvious. For values at which (x > 0 & x < 2) is false, the logical expression evaluates to 0, so (x > 0 & x < 2)./(x > 0 & x < 2) becomes 0/0, which is NaN, and NaN - 1 is still NaN. For values which are in range, (x > 0 & x < 2) evaluates to 1, and 1/1 is 1, and 1 - 1 is 0, so the result from the earlier part of the expression is unchanged.
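Evaluating the NaN-returning version on the same kind of inputs shows the difference:

```matlab
y = @(x) (x+1).*(0<x & x < 1) + (2*x+1).*(1<=x & x<2) + (x > 0 & x < 2)./(x > 0 & x < 2) - 1;
y([0.5 1.5 3])
% ans = [1.5  4  NaN]  -- the out-of-range x = 3 now yields NaN instead of 0
```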
The expression will return NaN for +/- Inf as well; expressions built on this basic construction of multiplying a value by logical 0 to ignore that value always fail for +/- Inf, because Inf*0 is NaN rather than 0.
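To see that failure mode, evaluate the original (zero-returning) version at Inf:

```matlab
y0 = @(x) (x+1).*(0<x & x < 1) + (2*x+1).*(1<=x & x<2);
y0(Inf)
% ans = NaN  -- (Inf+1)*0 is Inf*0, which is NaN, not the 0 you might expect
```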
Note that this kind of piecewise expression is not compatible with any of the minimizers in the Optimization Toolbox, all of which require continuous first derivatives for the expression. This kind of piecewise expression can only be used with some of the functions in the Global Optimization Toolbox, such as ga(), particleswarm(), or patternsearch(), as those functions are derivative-free.
If you were thinking of using this with fmincon(), then you are taking the wrong approach. For any of the minimizers that estimate derivatives internally (or require explicit derivatives), you should break the problem up at each point where the derivative is discontinuous, minimize each section separately, and then take the best result over the sections.
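A minimal sketch of that approach for this particular piecewise function, assuming you have the Optimization Toolbox (the start points and closed bounds here are chosen for illustration):

```matlab
% Each piece is smooth on its own interval, so fmincon() can handle it there.
f1 = @(x) x + 1;     % piece valid on (0, 1)
f2 = @(x) 2*x + 1;   % piece valid on [1, 2)
opts = optimoptions('fmincon', 'Display', 'off');
[x1, v1] = fmincon(f1, 0.5, [], [], [], [], 0, 1, [], opts);  % minimize over [0, 1]
[x2, v2] = fmincon(f2, 1.5, [], [], [], [], 1, 2, [], opts);  % minimize over [1, 2]
% Take the best result over the sections.
xs = [x1, x2];
[bestval, idx] = min([v1, v2]);
bestx = xs(idx);
```

Note that fmincon() needs closed bound intervals, so the open endpoints of each piece are approximated by the closed bounds above; if an endpoint must be strictly excluded, nudge the bound inward by a small tolerance.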