In this case, `i` is a type suffix meaning `int`. `5u` would be an unsigned int; `5i32` would be a 32-bit int; `5f64` would be a double-precision float, and so forth.
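For illustration, a minimal sketch of those suffixes in use, in the same pre-1.0 syntax the rest of this thread uses:

```rust
let a = 5i;   // int: machine-sized signed integer
let b = 5u;   // uint: machine-sized unsigned integer
let c = 5i32; // 32-bit signed integer
let d = 5f64; // 64-bit (double-precision) float
```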
Yes, it could be inferred if left out, thanks to type inference. An unsuffixed `5` could end up signed or unsigned; it depends on how you use it.
So you could try:
```rust
use std::num::abs;

let v = vec!['a', 'b', 'c', 'd'];
let x = 3;
let c = v[x]; // v.index (operator overload) constrains x to type uint.
let y = abs(x); // error: failed to find an implementation of trait core::num::Signed for uint
// abs can't constrain the type to an int, because abs can take anything that
// implements `Signed` (e.g. BigInt).
```
If you wanted to make sure that `x` was signed:
```rust
let v = vec!['a', 'b', 'c', 'd'];
let x = 3i;
let y = abs(x); // Ok, as x is signed.
let c = v[x]; // error: mismatched types: expected `uint` but found `int`
```
If you really wanted to index with an int, you could use a checked cast:
```rust
let casted = x.to_uint(); // Returns None if x is negative, or Some(x) if it's non-negative.
```
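A sketch of what handling that might look like, still in the pre-1.0 syntax above (the `match` arms are just one way to deal with the `Option`):

```rust
// Handle the checked cast before indexing.
match x.to_uint() {
    Some(idx) => println!("{}", v[idx]), // idx is a uint, so indexing is fine
    None => println!("x was negative"),
}
```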
Also, a generic integer literal doesn't have to be `int` or `uint`; it can be inferred as any integer type: signed (`i8`, `i16`, `i32`, `i64`, or `int`) or unsigned (`u8`, `u16`, `u32`, `u64`, or `uint`).
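For example, a quick sketch of the same unsuffixed literal being inferred as different types from context:

```rust
let a: u8 = 5;    // this 5 is inferred as u8
let b: i64 = 5;   // this 5 is inferred as i64
let c = 5i16 + 5; // the right-hand 5 is inferred as i16
```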
AFAICT the suffix only needs to be written if you want to assign a literal to an implicitly typed variable; otherwise you can use the alternate explicitly-typed form (which the guide also shows):
```rust
let x: int = 5;
```
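To make the equivalence concrete, a small sketch of the two forms side by side:

```rust
let a = 5i;     // suffix form: a has type int
let b: int = 5; // annotation form: b also has type int
```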
As far as `i` being an unfortunate suffix goes, I can't think of a better one when you consider that there is already a pattern to such suffixes (being the first letter of the type) carried over from C/C++ et al.
I suspect mathematician programmers who are already used to `*` being multiply and `^` being bitwise xor (instead of, say, an exponentiation operator) can probably learn to deal with it, or just avoid it by avoiding implicit typing for ints.
That's an option. Personally, I think the implication that the number is somehow pluralized is potentially more confusing than the implication that it is imaginary, but YMMV.