Change "matrix" to a scalar

José Rodrigo Corrales Díaz de la Vega
Answered: Star Strider on 13 Sep 2021
I have this code:
function put=Montecarlo(s0,k,r,sigma,t,n)
    sum0=0;
    sum1=0;
    var=0;
    for i=1:n
        st=s0*exp((r-1/2*sigma^2)*t+sigma*randn(0,1)*sqrt(t));
        h=max(k-st,0);
        sum1=sum0+h/i;
        if i>1
            var=var*(1-(1/(i-1)))+i*(sum1-sum0)^2;
        end
        sum1=sum0;
    end
    put=exp(-r*t)*sum1
    desviacion=sqrt(var)/sqrt(n)
    intervalo=[put-1.96*desviacion,put+1.96*desviacion]
...
When I try to run it, MATLAB seems to treat sum0 and sum1 as matrices.
put(50,52,0.06,0.12,0.5,100)
Error using ^ (line 51)
Incorrect dimensions for raising a matrix to a power. Check that the matrix is square and the power is a scalar. To
perform elementwise matrix powers, use '.^'.
Error in put (line 10)
var=var*(1-(1/(i-1)))+i*(sum1-sum0)^2;
How can I solve this problem?

Answers (1)

Star Strider on 13 Sep 2021
If you want to square the elements of ‘(sum1-sum0)’, use element-wise exponentiation (the ‘dot operator’):
var=var*(1-(1/(i-1)))+i*(sum1-sum0).^2;
See Array vs. Matrix Operations for details.
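For example (with made-up values, just to show the difference), ‘^’ fails on a non-square array, while ‘.^’ squares each element:
v = [1 2 3];    % 1-by-3 vector, not a square matrix
% v^2           % would throw: Incorrect dimensions for raising a matrix to a power
v.^2            % element-wise square: [1 4 9]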
If you then want ‘var’ to be a scalar, sum it:
var=var*sum((1-(1/(i-1)))+i*(sum1-sum0).^2);
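For example (again with arbitrary values), sum collapses a vector of squared terms into a single number:
d = [0.1 -0.2 0.3];
sum(d.^2)       % returns 0.1400, a scalar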
