Sag 00, 03 / Nov 4, 19 00:22 UTC
Okay, the Mirrored Moon project was a bit idealistic. Well, actually, a lot (that doesn't mean we won't act on it in the future if we can), so let's talk about something that is happening right now: the Iris Project, a brand new programming language.

Short showcase that only shows the compilation of const global vars -> https://vimeo.com/357272053
Long showcase that shows the whole UI (we will make it prettier, don't worry) and a bit of the features the language will have; unfortunately, it's in Spanish -> https://www.youtube.com/watch?v=eZWAON-Lg5A

But why did we reconsider and decide not to develop the C++ framework? Simple enough: C++ is deprecated. And no, don't get us wrong, we know the C++ language has kept evolving (C++11, C++14, C++17, C++20...) and its libraries/frameworks have also been in constant development. We say it's deprecated because its low-level features are odd, very odd (we can explain further in the comments section if anyone has doubts about how we reached that conclusion). We looked at other languages that offer solutions to C++'s problems, such as Rust and Go, but Rust's memory model didn't offer what we were looking for, and Go has too many design problems for us.

So now we have the initial motivation, but that alone is not a real reason to design an entire programming language and the tools it implies. Why would we do it, then? The answer is the language itself. We really loved each new piece of design that came to mind. And we are going to share that here, today, now.

First, we decided to make the language totally network dependent. What does this mean, and why would we do it? The concept is that the code written by the client is sent to our server, hosted (TOTALLY) by us, compiled there, and the response is sent back (a binary or a retina map, a serialized format similar to Clang/LLVM's AST). Obviously, the data would be encrypted on both sides. The reasons to do this:

- Multi-platform (i.e., compile from a Windows machine for a Linux one).
- Multi-architecture (i.e., we could compile the same code sent by the user for different target architectures, say SSE2 and NEON, without any explicit intrinsics and without losing any SIMD instructions; everything will be handled by the compiler by generating a new binary for each architecture specialization).
- But wouldn't generating a binary for each target architecture consume a lot of the client's disk space? No, because the library/application will be streamed:

Libraries:
- The code will be converted to a retina map, so further compilations will take less time.
- The retina map is stored on the server, which returns an ID plus the public, external method symbols/arguments, so the Iris IDE knows how the exported methods are structured.
- The ID and exported methods are stored on the client's disk with the extension .retina, and are imported into the Iris IDE whenever the project is going to use them (see the sketch after this list).
- Once we compile against that library, the ID is sent, and the server resolves it to the library's previously stored retina map, so the compiler knows what we are linking against.

Application:
- The code is compiled for the different platforms/architectures in their respective ABI formats and stored on the server's disk.
- A retina file containing (only) an ID that points to the location of the compiled application is given to the programmer.
- The end user must have Iris Manager installed; it sends the server the input (the retina file with the ID) plus a detection of the user's current OS/architecture capabilities, and retrieves the most appropriate (if any) version of the compiled application.

MORE FEATURES IN THE COMMENTS SECTION
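Since the Iris toolchain isn't public, here is only a minimal C++ sketch of what a client-side .retina descriptor might hold under the scheme described above: an opaque server-side ID plus the exported method signatures. Every name, field, and format choice here is hypothetical, not real Iris code.

```cpp
// Hypothetical sketch of a client-side .retina descriptor; illustrative only.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// One exported symbol as the server might report it back to the IDE:
// just enough for the editor to offer completion and type checking.
struct ExportedMethod {
    std::string name;        // e.g. "vec3_reflect"
    std::string signature;   // e.g. "(Vec3, Vec3) -> Vec3"
};

// The .retina file contents: a server-side ID plus the public interface.
// The library body itself stays on the server; linking sends only the ID.
struct RetinaDescriptor {
    std::uint64_t id = 0;             // key into the server's retina-map store
    std::vector<ExportedMethod> api;  // public, external methods only

    void save(const std::string& path) const {
        std::ofstream out(path);
        out << id << '\n';
        for (const auto& m : api)
            out << m.name << '\t' << m.signature << '\n';
    }
};

int main() {
    // Pretend the server just compiled our library and answered with this:
    RetinaDescriptor desc{42, {{"vec3_reflect", "(Vec3, Vec3) -> Vec3"}}};
    desc.save("mathlib.retina");  // cached locally; only the ID is sent when linking
    std::cout << "stored retina id " << desc.id << '\n';
}
```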
Oph 12, 03 / Oct 19, 19 04:07 UTC
Asgardia plans to add, in the near future, spaceships in Earth orbit. The design for these spaceships is pretty big, bigger than any satellite around us right now. So, how could we feed that monstrosity on the night side of the Earth, where the Sun's light doesn't reach? Would the energy we get from far-away stars be enough? Maybe, but at Iris Technologies we don't think it's something to leave to some future research study, so we came up with an alternative solution: mirrors on the Moon.

Now that we have posed the problem, how would that fancy solution work? To answer that, we need to ask ourselves: what color is the Moon at night? White/light gray! Here's the thing: while our spaceship is on the night side of the Earth, hidden from the Sun by the Earth itself, the Moon isn't! Unfortunately, the Moon's surface material can't reflect a high percentage of the UV rays it receives, but who said we would be using the Moon's surface? Fortunately, there are tons of UV-A reflective materials, and they are relatively cheap, so we could build a field of mirrors out of this reflective material.

There we go! We have our ship being fed by our reflective mirrors, don't we? Yes, but only for a second, because our ship would be constantly moving, and handling that is the last step to achieve our goal. So what we need now is, first, to install a rotator on the base of each mirror so it can rotate to reflect the UV rays at the position of our ship; and next, to obtain that position by installing a GPS on our ship that sends the spaceship's current coordinates to the mirrors (through an antenna), and then use some math on the angles to constantly rotate the mirrors around the right axis (a sketch of that math is below). Now we are really done, and we have our Project: Mirrored Moon completed.
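For the curious, here is a minimal C++ sketch of the aiming math hinted at above, under the simplifying assumption that we already have unit vectors from the mirror toward the Sun and toward the ship. For a flat mirror, the reflected ray heads toward the ship exactly when the mirror's normal bisects those two directions. The names and setup are ours for illustration, not part of the project.

```cpp
// Mirror-aiming sketch: the mirror normal must bisect the Sun and ship directions.
#include <cmath>
#include <iostream>

struct Vec3 { double x, y, z; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Direction the mirror's face must point so that light arriving from
// `to_sun` leaves along `to_ship` (both unit vectors away from the mirror).
Vec3 mirrorNormal(Vec3 to_sun, Vec3 to_ship) {
    return normalize(add(to_sun, to_ship));
}

int main() {
    Vec3 to_sun  = {0.0, 0.0, 1.0};             // Sun straight overhead
    Vec3 to_ship = normalize({1.0, 0.0, 1.0});  // ship 45 degrees off
    Vec3 n = mirrorNormal(to_sun, to_ship);
    std::cout << n.x << ' ' << n.y << ' ' << n.z << '\n';
}
```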
Oph 11, 03 / Oct 18, 19 01:53 UTC
Hey! Here's Iris Technologies, and after a long time without any activity, we believe that Asgardia, as a formed nation (we believe it will happen eventually), needs to produce its own top-quality software in terms of performance & functionality. So it's time to move forward and develop our first C++ project (we believe C++20 is the appropriate language for it) in the name of Asgardia and what it represents. It will be hosted on our CEO's (Samuel Alonso) GitHub account, and its name will be Aesha. We decided to follow Asgardia's philosophy, so we will set the license to MIT, which is a pretty permissive license. We hope all of you enjoy this initiative, and share the framework with everyone interested.

Aesha Project: https://github.com/samuelalonsorodriguez/Aesha

Cheers, Iris Technologies
Tau 21, 03 / Apr 15, 19 04:03 UTC
This is probably the project of my life. And it is also the dream of many others. Today, Iris Technologies is glad to announce the technology of the century: the Iris device, which will turn science fiction into reality. Something like Aincrad and the NerveGear is closer than you ever thought.

So how does that "magic" happen? With a good amount of economic help, we would create a server capable of managing tons of data. We will call this server Retina from now on. Now, let's imagine two things: an apple, and a light source, some of whose rays hit that apple. Let's see what happens at a subatomic level at the surface. The photons there would collide with the magnetic field of the apple's atoms, producing changes in their direction (calculated with Retina's Vector3 reflection method, as traditional ray paths/ray tracing would do; see the sketch below), their power (some loss of speed), and their wavelength. Now imagine a lot of photons colliding across the whole surface of that apple, generating those three changes constantly. Next, let's add a photon receptor, an eye, which will capture all those photons reflected from the apple. Those photons would collide at the iris, generating from those collisions a series of electrons which the cornea nerve will carry to a brain, in this case, the player's brain. But where is it? Well, unfortunately the brain is outside of Retina's virtual world and its fancy processed photons, so how could we build a bridge between reality and Retina?

Conversions! It's known that, depending on the properties the photons carry, our brain processes the output image differently. That difference is known as color, and fortunately colors are constant, which is the key for us to process them. Let's encode them into a color system a machine can interpret, such as the old-but-gold RGBA, creating a virtual entity which won't interact with the photons' properties (won't write them) but instead just acts as a reader, extracting the information of the photons that collide with the virtual iris. Once the RGBA output is calculated, we can send it from Retina to the Iris device linked with that eye's ID (multi-player systems will have multiple eyes, so there's an ID to identify which packed RGBA data corresponds to which eye, allowing a correct send to the Iris device with that ID).

But hey, the result would be rasterizing the output on a screen, and we don't want that. We want to send that data directly to the user's brain, allowing total immersion. So let's decode that RGBA back into photon properties. Unfortunately, at this point the brain won't have any idea what a photon is, because, as I already explained, brains only work with electromagnetic impulses (electrons). There's also another problem which is, at the same time, the solution: the real eye. The real eye would be constantly sending data to the brain from real-life photons, so we will need to intercept that signal so our poor brain doesn't mix both sets of electromagnetic impulses; so let's overwrite that signal. Fortunately for us, there already exist solutions to generate an electromagnetic wave with a certain power and direction, so we send those electrons to some point on the cornea nerve, intercepting the electrons from the real eye and letting the ones from the light source find their way to our beloved brain and... VOILÀ!!! We would be seeing the virtual apple as if it came from our real eye.
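As a side note, the "Vector3 reflection method" mentioned above is presumably the standard ray-tracing reflection formula, r = d - 2(d . n)n, for an incoming direction d and a unit surface normal n. Retina's actual code isn't public, so the following minimal C++ sketch is only illustrative of that textbook formula.

```cpp
// Standard ray-tracing reflection: r = d - 2 (d . n) n.
#include <iostream>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reflect incoming direction d about unit normal n.
Vec3 reflect(Vec3 d, Vec3 n) {
    double k = 2.0 * dot(d, n);
    return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
}

int main() {
    Vec3 d = {1.0, -1.0, 0.0};  // photon heading down-right
    Vec3 n = {0.0,  1.0, 0.0};  // surface facing up
    Vec3 r = reflect(d, n);     // expected: (1, 1, 0)
    std::cout << r.x << ' ' << r.y << ' ' << r.z << '\n';
}
```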