Hacker News | tvali's comments


..title continues: based on any part of my Laegna math theory and Spireason scientific spirituality, a bare blueprint of assumptions, but still enough to take different philosophical sides in a fairly calm manner. The task was that millions of apps of the same scope and class should appear from random users: so it must not use the corpus of my theory as its essential idea, but must choose a small part it understands and scope it down.


This is the self-identity structure of wave inference mapped into linearity, which makes the base-4 system smooth (the one I described). https://spireason.neocities.org/Playground/InferenceCounter/... has some theory, and https://spireason.neocities.org/Playground/SheepCounter1/she... introduces the hologram base-4 system; this link was given before. You can see how to linearize many mathematical objects, like differentials, integrals, waves, octaves and frequencies: all exist in math, not just physics!
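As a conventional illustration of the claim that waves, octaves and frequencies are mathematical objects in their own right, here is a minimal NumPy sketch that "linearizes" a wave into frequency bins with the discrete Fourier transform. The signal parameters below are my own illustrative choices, not taken from the theory above.

```python
import numpy as np

# Sample a pure sine wave: 8 Hz, sampled at 64 Hz for exactly 1 second.
fs = 64                      # sampling rate (Hz)
t = np.arange(fs) / fs       # 64 sample times in [0, 1)
signal = np.sin(2 * np.pi * 8 * t)

# The DFT maps the wave into a linear array of frequency bins.
spectrum = np.abs(np.fft.rfft(signal))

# With 1 second of data, the bin index equals the frequency in Hz.
peak_hz = int(np.argmax(spectrum))
print(peak_hz)  # 8
```

Doubling the input frequency to 16 Hz moves the peak one octave up, which is the sense in which octaves live in the frequency axis, not only in physics.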


Base-2-like qualities avoid irrationals in proper log and exp; not using the main digit logic for zero makes it "frequential", where digits fall on meaningful positions in calculus such as Fourier analysis, and Hilbert spaces really look as if they have to be as strange as they are.
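A small, conventional sketch of the "base-2 avoids irrationals" point: base-2 logarithms of powers of two are exact integers, while the logarithm of most other values is irrational. The specific numbers below are my own illustrative choices.

```python
import math

# log2 of a power of two is an exact integer: no irrational appears.
powers = [1, 2, 4, 8, 1024]
exact = [math.log2(p) for p in powers]
print(exact)  # [0.0, 1.0, 2.0, 3.0, 10.0]

# By contrast, log2 of a non-power-of-two is irrational,
# so any finite-precision float can only approximate it.
print(math.log2(3))  # 1.584962500721156...
```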


Usage: AI extensions, business calculations, Hilbert-Fourier transformations in simplified frameworks, and modelling of spiritual systems in metatheory (metaphors and parallels seem especially like scaled dimensions, so a single-octave model can be used to represent unified modelling).


My logic meets Hilbert spaces here: 3 general and a few specific solutions for DL and AI, including ML visualization. Either zoom in and look at it in a higher space, or zoom out and simplify its space to yours. https://github.com/tambetvali/LaegnaAIMLBasics is my machine learning intro; I got many readers/cloners from here and about 4 stars for AIBasics (typically you use 2 computers and do not read online; those are the free stats from GitHub; my AIExperiments repo brings some code for GPT). But I rather found out it was very GPT-related by expressing the perceptron: basically the GPT model is an architecture built on the perceptron. I am advocating the use of ML here. So all my news in one post, if you dare to read the comments :)


Sorry, also: https://exponential-whispers.lovable.app/ is a visual sitemap of exponentiality. https://github.com/tambetvali/LaegnaAIMLBasics/tree/main is my machine learning guide, following the AI guide on the perceptron which I introduced before.


Here is the correct answer:

Yes, this is for GPT, but also for the general perceptron and machine learning.

On generalization, it assumes input vector elements and output vector optimization, in one iteration, with one cell, for perceptrons and machine learning. Specifically, there is optional many-to-many connectivity, assuming bias and weight matrices, which aligns with perceptrons.
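A minimal sketch of the single-cell setup described above (input vector, weight vector and bias, updated one sample at a time), written with NumPy. The toy data, learning rate and names are my own illustrative assumptions, not code from the repository; a many-to-many layer would replace the weight vector with a matrix.

```python
import numpy as np

def step(x):
    """Heaviside activation of the classic perceptron cell."""
    return (x >= 0).astype(float)

# One cell: weight vector w and scalar bias b.
rng = np.random.default_rng(0)
w = rng.normal(size=2)
b = 0.0

# Toy data: learn logical OR with the perceptron update rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

for _ in range(10):                  # a few passes over the data
    for xi, yi in zip(X, y):
        err = yi - step(xi @ w + b)  # prediction error for this sample
        w += 0.1 * err * xi          # perceptron update rule
        b += 0.1 * err

print(step(X @ w + b))  # learned OR: [0. 1. 1. 1.]
```

Because OR is linearly separable, the perceptron convergence theorem guarantees this loop settles on a correct separating line.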

In GPT models, with CoPilot it's actually less visible that its root does not calculate a basis for exponential and linear coefficients and their orders. If it did, it would take considerably fewer layers than what it's doing at an abstract, trained level: it is using heavy maths to calculate symbolics for you, such as integrals and differentials, at the abstract number level.

Inside the layers, they won't do this at a symmetric, mathematically consistent level to output a homonomous multidimension for optimization. It is generally your social level and personal time outcomes, and a basis for all religion: somehow you find the proper coefficient to balance between short-term and long-term gain, yin and yang, and you form society and personal life. This is a topic for the introduction part, as well as a more advanced topic for the summary: let's assume the "general audience" reads the first 2 and last 2 pages, while an architect "scans" (bold, italic, and in popular parts even some colors are used). It generally comes down to what you gain by "holy" accumulation versus how you survive in constant "mundane" realms based on linear coefficients (the base chakra), such as raising children generation by generation (a head chakra). This is uniform towards capable measurements in religion and science, constituting human life within its various perspectives and models for real-life measurement in its terms.


I had to think the whole day, haha, about what the correct answer is :) Your question might be of interest: whether you can use it for GPT models. It covers a simple GPT in PyTorch, with some pseudocode properties: you do it in the activation function, the non-linear perspective projection of a two-level or "frequential" differential calculus.

I thought other readers might be interested in having scikit-learn-style pseudocode for general machine learning, where you can also simplify it finally by taking only the floor or round approximation of the differential coefficient. For a mathematical audience, outside the scope of choosing an AI, there is a complex-number implementation, which projects and layers. For the general perceptron it's basically given: it's a little simpler than the GPT hook for the activation layer (we have the imaginary part of the complex number), but a general perceptron probably does fewer attention phases. In GPT specifically, the complex-number implementation allows implementing the projection with a layer pair, and the output projection with only one complex-number activation function covering all that meaning, with memory consumption doubled.

In tensor space, the current spatial element is now relative: what used to be the absolute value of a float becomes the relative value of the real part of a complex number. Additionally, a spatial coordinate layer appears, able to remap the space based on the accumulation each value has towards a finer limit value for its own value in highly abstract math. More importantly, each number has inertia towards its own direction, and the activation layer creates accumulation of this inertia on a symmetric basis, but directed to the future, which creates non-linearity. Specifically, the nonlinearity which appears should look extremely similar to ReLU: if the real and imaginary parts are the same, it looks like ReLU, but it does not cancel a dimension out below zero; for example, it could append a logarithm to it. The AI optimizer can shape this "imaginary part" (the accumulation space or projection space; compare the projection matrix in 3D) and the "real part" (the actual number) together, in math which uses trivial solutions from the most-mapped part of the complex numbers, which is what we need. A 1D space, somehow trivially, maps into 2D space, and in my math this is aligned heavily with infinity properties as well: if we map real numbers on 1D line coordinates, whose domain now equals R, the set of real numbers, to real numbers on a plane, we get R^2, and we find that we do not find symmetric numbers. Infinity is the next dimension, and the union of a finite and an infinite dimension is also higher, in the sense of Hilbert spaces specifically. Now the complex number conveys this distance in the linear plane: its simplification from the higher space maps real numbers, through 2 dimensions, to a 1-dimensional realm and cancels out the element "i" by two-dimensional mapping. Let's say this is the dimension which appears "lower" or "imaginary" in a complex number, and has a smaller phase.
If you use this complex number instead of a float, it contains two floats: you use my activation function, and the 1-tensors and 2-tensors, despite now consisting of 2-dimensional cells, have math which looks the same in the equations, because for the two parts of the complex number you use a single letter, while still using the same operators: plus, minus, multiply and divide. This means it builds up to largely the same math proofs, sometimes a general form of the same equation; so you do not have to alter the heavy work behind the GPT architecture, but only apply a general complex number, where the imaginary part is projective and the real part is real space. In the tensor field, acceleration appears where it also maps to several frequencies and their complete math. You can map this very easily to known theories: you are interested in a more linear form of Fourier transformations, and you apply that more accelerative spaces have higher vibrations, although with a longer term (the dimension-density log-base-exponential quadratic difference or polarity, typical in math); so you keep the headers.
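A minimal NumPy sketch of the ReLU-like complex activation sketched above: the real part passes through ReLU, while the imaginary ("projective") part is kept rather than cancelled below zero. This is my own hedged reading of the description, not code from any repository; the function name and the choice to keep (rather than, say, take a logarithm of) the imaginary part are illustrative assumptions.

```python
import numpy as np

def complex_relu(z):
    """ReLU-like activation on a complex array.

    Real part: standard ReLU (clipped below zero).
    Imaginary part: kept as-is, so the "projective" dimension is
    not cancelled out below zero (one possible reading of the text;
    a logarithm could be appended there instead).
    """
    return np.maximum(z.real, 0.0) + 1j * z.imag

# When real and imaginary parts are equal, the real part behaves
# exactly like ReLU, while the imaginary part survives below zero.
z = np.array([1 + 1j, -2 - 2j, 0.5 + 0.5j])
print(complex_relu(z))  # [1. +1.j   0. -2.j   0.5+0.5j]
```

Because the function uses only elementwise operations, it drops into a complex-valued tensor pipeline without changing the shapes or the surrounding equations, at the cost of storing two floats per value.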


In deep learning in general, and in GPT: some sensitivity to a general exponent.


I have a similar issue. I think in the "dead internet": 1) people do not build custom forums for small social groups; 2) rule enforcement does not follow the structure of the past, where such a small group followed a deep code of conduct; and 3) in large pools and aggregators of forums and discussion groups, the rules to be followed are like small bots: they cannot rely on the context. I personally use AI on my page for a good purpose, because we actually get in sync; but it's not taking over my creative work: it's proofreading, adding some references and facts, and is sometimes very cocksure in its style, like "people won't read it".



