Author Topic: Star Citizen General BS  (Read 798613 times)

nightfire

  • Full Member
  • ***
  • Posts: 212
Re: Star Citizen - The Game
« Reply #195 on: February 23, 2017, 01:36:38 PM »
Buckle up for another session of Vogon Poetry:




Maybe they should drop the C in "Cloud Imperium Games"?  :ohdear:

dsmart

  • Supreme Cmdr
  • Administrator
  • Hero Member
  • *****
  • Posts: 4913
    • Smart Speak Blog
Re: Star Citizen - The Game
« Reply #196 on: February 23, 2017, 01:41:11 PM »
Ho Lee Cow!! I'm ded.

https://www.reddit.com/r/starcitizen/comments/5vnq5m/261_so_far_for_me/?st=izisnwh6&sh=51ee9498

Quote
it crashed my game so hard that the sound started coming out of my monitor rather than my headset
Star Citizen isn't a game. It's a TV show about a bunch of characters making a game. It's basically "This is Spinal Tap" - except people think the band is real.

nightfire

  • Full Member
  • ***
  • Posts: 212
Re: Star Citizen - The Game
« Reply #197 on: February 23, 2017, 03:17:32 PM »
32Bit Range = 2^31 − 1 = 2,147,483,647
Positional Data: (unsigned) Vector3(32b_X, 32b_Y, 32b_Z)[1cm Scale], Vector3(32b_X, 32b_Y, 32b_Z)[1.000km Scale], Vector3(32b_X, 32b_Y, 32b_Z)[1.000.000.000.000km Scale]
Positional Data max: 1.000.000.000.000.000.000.000km with a precision of 1cm in all axes
this is without touching the range of floating point error because I used only half of the 32bit total range (factor 1.000.000.000)

Whoaaahhh .. triple precision, that must be double-double float!!! ... no it isn't ... it's 3 times 32bit, which renders to 34bit (with one 32bit value left over), NOT 64 or 128 bit ...

I hear you, but I can't quite figure out your calculation yet. Let me start with the above end result:

Our example coordinate system shall have a length of 1.000.000.000.000.000.000.000km, or 10^21 km per axis. The smallest unit of resolution shall be 1cm, so there are 10^26 units(cm) per axis (1km = 10^5cm).

In my understanding, entropy law dictates that a minimum of log2(x) storage bits will be required to represent x units in a discrete (integer) storage layout. In this case, log2(10^26) = 86.37. So at least 87 bits per axis would be needed to store every arbitrary coordinate value between 0…10^26-1 cm without loss of precision.

If we chose a floating-point storage layout, my understanding is that values up to 10^26 would need 27 "significant digits" stored in our case. Assuming the IEEE 754 floating point standard, the smallest layout which could accommodate this is quadruple precision (128 bits, 33 significant digits), since double precision (64 bits) can only cope with 15 significant digits.

So at this stage I don't understand your conclusion yet that it's possible to represent 10^26 discrete values (units/cm) in 34 bits of storage, as it appears to me that it's not possible to get away with less than 87 bits. :confused: Please walk me through this part of the argument once more.
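For what it's worth, the bit-count arithmetic above is easy to check mechanically. A minimal Python sketch (my own, just restating the log2 argument):

```python
import math

def bits_needed(positions):
    """Minimum number of integer bits to address `positions` discrete values."""
    return math.ceil(math.log2(positions))

# 10^21 km per axis at 1 cm resolution -> 10^26 addressable units (1 km = 10^5 cm)
print(bits_needed(10**26))  # 87, i.e. log2(10^26) = 86.37 rounded up
```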

Narrenbart

  • Jr. Member
  • **
  • Posts: 99
Re: Star Citizen - The Game
« Reply #198 on: February 23, 2017, 06:03:27 PM »
[...]

So at this stage I don't understand your conclusion yet that it's possible to represent 10^26 discrete values (units/cm) in 34 bits of storage, as it appears to me that it's not possible to get away with less than 87 bits. :confused: Please walk me through this part of the argument once more.
This is true if you only want to have one variable to store it all per axis

(I just found a small error that I will correct in this example)
I build multiple coordinate systems like a Matrjoschka.

The first 32bit coordinate system covers 0.000001km (1cm) up to 10.000km. With three signed 32bit variables I now have 20.000km with a precision of 1cm in every direction.
(one 32bit per axis)

Now I put my coordinate system inside a larger one.
The second 32bit coordinate system covers 10.000km to 10.000.000.000.000km with a precision of 10.000km. Let's call it the sector variable: if you enter a new sector, the subsector variable can be reused for the new one, you just need to flip the axis (basically you know exactly where the player entered the sector and can apply the 1cm precision accordingly). Now we have a max of 20.000.000.000.000km with a precision of 1cm.
"flip the axis" like: subsector maxY is reached, increase sectorY by 1, set subsector to minY.
(two 32bit per axis)

Now I put my sector system inside a larger one and call it the universe variable.
The third 32bit coordinate system covers 10.000.000.000.000km to 10.000.000.000.000.000.000.000km with a precision of 10.000.000.000.000km. If the sector max is reached on any axis, I increase the universe variable of that axis and set the sector and subsector to min.
(three 32bit per axis)

Now I put my universe system inside a larger one and call it the galaxy variable.
The fourth 32bit coordinate system covers 10.000.000.000.000.000.000.000km to 10.000.000.000.000.000.000.000.000.000.000km with a precision of 10.000.000.000.000.000.000.000km.
(four 32bit per axis)
struct myPosition {
    Vector3 subsector; // subsectorX/Y/Z, 32bit each
    Vector3 sector;    // sectorX/Y/Z, 32bit each
    Vector3 universe;  // universeX/Y/Z, 32bit each
    Vector3 galaxy;    // galaxyX/Y/Z, 32bit each
};

[Edit: reflecting the US number system, I am at (signed) 10^31 * 2km, which is 20 nonillion kilometers :D but to avoid floating point errors I would scale it down a little bit :)]

And I am now at 10^31km (there was my error in my last calculation, I forgot some zeros somewhere) with a precision of 1cm. If I reach a border of a coordinate system, I increase the overlaying coordinate system and set the underlying system to zero. For 10^31 I need four 32bit variables per axis, which can be projected as one 34bit if you want (64bit contains 2,147,483,647 32bit variables).

With five 32bit variables I would be at a 10^40km range, which is 10 duodecillion.

WAHHHH, I just read that you Americans have other number names ...
Europe: 1 sextillion = 10^36
US: 1 sextillion = 10^21
so for 20 US sextillion I just need 3 coordinate systems

Although you would need an 87bit variable to store ALL subsector, sector, universe and galaxy data at once (with a live precision of 1cm for EVERY cm, including where there is nothing), that is senseless, because there is only one you and all other systems would be empty. You would only need this if you could see everything in all galaxies at once (in other words, if you are a God, you'll need at least 87bit). Instead we reuse the coordinate system, because we only need one position for every object - 64bit would be an almost godly waste at this point.
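If I follow the Matrjoschka idea correctly, the border-crossing carry could be sketched like this. This is my own two-level simplification with illustrative names, not Narrenbart's actual code; the 10.000km subsector span matches the factor 1.000.000.000 mentioned earlier:

```python
# Two-level version of the nested ("Matrjoschka") coordinate idea:
# a fine 'subsector' offset in cm plus a coarse 'sector' index.
SUBSECTOR_SPAN_CM = 10_000 * 100_000  # 10.000km expressed in cm

def advance(sector, subsector_cm, delta_cm):
    """Move along one axis; carry any overflow into the sector index."""
    total = subsector_cm + delta_cm
    sector += total // SUBSECTOR_SPAN_CM   # "flip the axis": bump the sector
    subsector_cm = total % SUBSECTOR_SPAN_CM
    return sector, subsector_cm

# Crossing a subsector border increments the sector and wraps the offset.
print(advance(0, SUBSECTOR_SPAN_CM - 1, 2))  # (1, 1)
```

Stacking more levels (universe, galaxy) just repeats the same carry one layer up.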
« Last Edit: February 23, 2017, 06:28:03 PM by Narrenbart »

dsmart

  • Supreme Cmdr
  • Administrator
  • Hero Member
  • *****
  • Posts: 4913
    • Smart Speak Blog
Re: Star Citizen - The Game
« Reply #199 on: February 23, 2017, 06:24:30 PM »
I just stick with 64-bit coords, implement a floating origin, and go take a nap. It works* just fine, and has larger regions than any wet dream that CIG can cook up.

*
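For the curious, the floating-origin trick amounts to rebasing the world around the player whenever coordinates grow too large. A rough sketch (the threshold and names are illustrative, not from any actual engine):

```python
# Floating-origin sketch: keep coordinates small by shifting the whole
# world back toward the origin once the player drifts too far out.
REBASE_THRESHOLD = 10_000.0  # illustrative, in world units

def maybe_rebase(player_pos, entities):
    """Shift all positions so the player sits back near the origin."""
    if max(abs(c) for c in player_pos) < REBASE_THRESHOLD:
        return player_pos, entities  # still close enough, do nothing
    shift = player_pos
    new_entities = [tuple(e - s for e, s in zip(ent, shift)) for ent in entities]
    return (0.0, 0.0, 0.0), new_entities

pos, ents = maybe_rebase((20_000.0, 0.0, 0.0), [(20_100.0, 5.0, 0.0)])
print(pos, ents)  # player back at the origin, entity at relative (100.0, 5.0, 0.0)
```

Relative positions between objects are preserved, so precision stays high where the player actually is.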


Narrenbart

  • Jr. Member
  • **
  • Posts: 99
Re: Star Citizen - The Game
« Reply #200 on: February 23, 2017, 06:26:18 PM »
As I said before, KSP is fine with a sub coordinate system and an overlay coordinate system - CIG likes BIG numbers (Big Universes, Big Ships, Big Community Managers, Big Bugs)

Lir

  • Newbie
  • *
  • Posts: 34
Re: Star Citizen - The Game
« Reply #201 on: February 24, 2017, 06:39:31 AM »
[...]

So at this stage I don't understand your conclusion yet that it's possible to represent 10^26 discrete values (units/cm) in 34 bits of storage, as it appears to me that it's not possible to get away with less than 87 bits. :confused: Please walk me through this part of the argument once more.

Lol guys, you should not worry too much about that, with CIG the result will always be = pizza

nightfire

  • Full Member
  • ***
  • Posts: 212
Re: Star Citizen - The Game
« Reply #202 on: February 24, 2017, 01:32:13 PM »


Although you would need an 87bit variable to store ALL subsector, sector, universe and galaxy data at once (with a live precision of 1cm for EVERY cm, including where there is nothing), that is senseless, because there is only one you and all other systems would be empty. You would only need this if you could see everything in all galaxies at once (in other words, if you are a God, you'll need at least 87bit). Instead we reuse the coordinate system, because we only need one position for every object - 64bit would be an almost godly waste at this point.

Ok, NOW I get it. It was the way of subdividing and reusing the coordinate system which I didn't get entirely in the first run. Thanks for clarifying!  :science:

nightfire

  • Full Member
  • ***
  • Posts: 212
Re: Star Citizen - The Game
« Reply #203 on: February 24, 2017, 01:34:54 PM »
Lol guys, you should not worry too much about that, with CIG the result will always be = pizza

Agreed, I just want to make sure that I'm not seeing pineapple pizza when everyone else is talking about seafood pizza  :D

dsmart

  • Supreme Cmdr
  • Administrator
  • Hero Member
  • *****
  • Posts: 4913
    • Smart Speak Blog
Re: Star Citizen - The Game
« Reply #204 on: February 27, 2017, 05:39:55 PM »
Oh yeah, year 6, "building a small design team". Because yeah, that's perfectly normal.

Interview with Brian Chambers


dsmart

  • Supreme Cmdr
  • Administrator
  • Hero Member
  • *****
  • Posts: 4913
    • Smart Speak Blog
Re: Star Citizen - The Game
« Reply #205 on: February 28, 2017, 12:29:24 PM »
Ah yeah, remember back when we said that the Star Citizen networking kernel was getting worse? Right, you did. So go see just how bad it really is now in 2.6.1

« Last Edit: February 28, 2017, 04:08:34 PM by dsmart »

dsmart

  • Supreme Cmdr
  • Administrator
  • Hero Member
  • *****
  • Posts: 4913
    • Smart Speak Blog
Re: Star Citizen - The Game
« Reply #206 on: March 02, 2017, 10:39:38 AM »
Star Citizen isn't a game. It's a TV show about a bunch of characters making a game. It's basically "This is Spinal Tap" - except people think the band is real.

nightfire

  • Full Member
  • ***
  • Posts: 212
Re: Star Citizen - The Game
« Reply #207 on: March 02, 2017, 11:08:41 AM »
Ah yeah, remember back when we said that the Star Citizen networking kernel was getting worse? Right, you did. So go see just how bad it really is now in 2.6.1

You obviously don't understand game development. Star Citizen is currently still in alpha and the gameplay isn't optimized yet. Performance will get a lot better once the netcode is fixed, and we'll see a huge leap forward once 3.0 is released. Also, network lag has already improved a lot since they moved to Lumberyard and AWS regional servers. My guess is that the YouTube reviewer just didn't connect to the right server, probably because he doesn't understand game development either :D :D

dsmart

  • Supreme Cmdr
  • Administrator
  • Hero Member
  • *****
  • Posts: 4913
    • Smart Speak Blog
Re: Star Citizen - The Game
« Reply #208 on: March 02, 2017, 02:47:21 PM »
I had written (1, 2, 3) about the AWS stuff before, so this post is just a placeholder so that everything is in one place for deep linking. Bookmark it. Then wait.


In the latest newsletter, regional server instances are coming in the 2.6.1 patch (due out Feb 17th)

Quote
Star Citizen Newsletter - Regional Servers Inbound
February 3rd, 2017

Greetings Citizen.

Across all our studios, work on the upcoming Alpha 2.6.1 patch is progressing nicely. There’s still some UI work to complete and stability issues to iron out, but, as you can see in our updated production schedule report, we’re almost ready to get this latest patch into the players’ hands. In fact, we're happy to announce an addition to this patch. Thanks to the great work by the Live Ops, Backend and UI teams, we're moving up the release of the Regional Servers to 2.6.1, so players will be able to choose which server (North America, Europe, or Australia) they join to ensure the best connection possible. Once these are running, we’ll be able to run more tests to assess whether more locations will be needed.

This week I split my time between Foundry 42 offices in the UK and Germany. I’ll be spending another week in the trenches with the devs at Foundry 42 to oversee our advancement on a number of fronts.

Thanks to everyone who showed their support for Star Citizen last weekend at both PAX South and the community-organized Bar Citizen event in San Antonio, TX. It’s just another example of how dedicated and inspiring our fanbase can be. In fact, we’ve been looking for more ways to bring the community front and center. That’s why this week we premiered a new show called Citizens of the Stars that focuses on the important part you play in Star Citizen. Give it a watch to see some of the incredible things the community is doing.

-- Chris Roberts

Basically, using the AWS support in Lumberyard, they can do this now. They couldn't do it before with Google Compute Engine because they'd have to write an ass-ton of code to do it. Amazon has done it for them via their AWS->CryEngine->Lumberyard implementation. Which is one of the things I wrote about in my recent Irreconcilable Differences blog, in which I discuss the Lumberyard switch.

Forget about fragmentation of their already dwindling player base; the AWS cloud instances won't cost them anything if nobody is connecting to them. In fact, all it will cost them is whatever the AWS bandwidth costs to update them. And since each patch is like 40GB, well then.

What's going to be absolutely hilarious is if they don't enable (in the UI) the ability to select an AWS instance to connect to, which means that if you are in Australia and can't find players, there won't be any way to switch to US-based instances, which would obviously be more populated. Much rage will be heard.

This is really just another check mark in their pledge promise sheet. Only about a few hundred more to go.

Oh, and lest we forget: some backers are rejoicing over "regional servers" while forgetting that the promises Chris Roberts made about "1000 player instances" are never - ever - going to happen. And it certainly isn't going to happen with regional AWS instances. Have fun with your sub-par 16 player instances (not to be confused with the higher 24 clients allowed in the shopping hub).

And if they are in fact implementing LumberYard GameLift, my reaction ---->  :laugh:



Quote
I still have no idea how 1000+ will be technically possible, but I know sod-all about game development.

And that quoted statement doesn't make a huge amount of sense, unless they're having instances ("servers") within instances ("instances") in which case it's still instanced, just called something different.

It's all rubbish tbh.

An "instance" is just a copy of the entire game. It came to be when describing a single server (hardware) running multiple copies (instances). Even a single server running a single copy of the game, is a "dedicated server instance"

And cloud servers are no different, except a GCE|AWS instance is just a software copy running on hardware servers and with no access to physical machines.

e.g. LoD runs only on hardware servers (co-lo at a data center). And we run separate "scenes" (aka levels) each with the ability to handle 1-256 clients. Each server is powerful enough to handle multiple scenes. So we can run either n+1 space scenes on a server or just 1. In short, the hw server is hosting the instances.

And the way it's all connected is based on architecture we built specifically so that we could control the number of clients in each scene. So if a scene has a client cap (which is server-side configured), no more clients can connect to it until one client drops or leaves. And all scenes are connected in such a way that it all appears as one universe (though it's just 13 connected scenes stitched together with magic). A player going from a space scene on one server to a planetary scene on another server doesn't notice anything, as it's just an IP connection via a jump gate. And during the jump handshake, if the target server is full or off-line, the connection is rejected, you get a message - and you stay where you are and try again later.
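The cap-and-reject handshake described above boils down to a few lines of logic. A hypothetical sketch (the class and method names are mine; this is not LoD's actual code):

```python
# Sketch of a scene-cap handshake: a jump is rejected when the target
# scene is full or offline, and the player simply stays where they are.
class Scene:
    def __init__(self, cap):
        self.cap = cap          # server-side configured client cap
        self.clients = set()
        self.online = True

    def try_join(self, client_id):
        if not self.online or len(self.clients) >= self.cap:
            return False        # rejected: stay put, retry later
        self.clients.add(client_id)
        return True

planet = Scene(cap=2)
print(planet.try_join("a"), planet.try_join("b"), planet.try_join("c"))  # True True False
```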

Also, a single hw server runs a number of scene instances depending on their resource requirements, e.g. space scenes don't have as much stuff as planets; so we can run 2-4 space scenes on 1 server, while running 1-2 planetary scenes on another server. Our scenes are of 3 types: space (x4), planet (x4), interiors station|capital ship (x5).

There is no gain to having 1000 clients in an instance if the performance is just going to suffer, thus yielding a horrid experience for gamers. And even if you did it, the bandwidth costs alone - especially on cloud instances - would be cost prohibitive.

When running on an architecture like ours, not only do you get around the n+1 client issue, but player-player comms is a non-issue because it's all one universe. You can be in a scene instance (e.g. space) and communicate with someone in another instance (e.g. planet). Sure, you won't see them due to distance and location, but you can still communicate with them. And if by some fluke a 256-client scene instance ends up being full, unless all clients are within a certain range bubble, the packets are prioritized based on proximity.

And we don't have the problem of "grouping with friends" because it's all one cohesive universe. No matter where or when you connect, you will always find your friends; and can join them as long as the scene they are on isn't pop-locked.

A small team of renegade indies, led by a semi-retired mad man, built this. In a span of under two years. And it just works. To the extent that, if you look at our changelog, we haven't messed with networking in over three years. And never underestimate the power of AI bot clients to use for load balancing and testing.



Quote
Hehe, I am generally aware of the concepts* and to be honest I'd rather that the term "Cloud" was replaced by "Someone Else's Computer" as it sounds a hell of a lot less magical.

The whole 1000+ simultaneous players thing makes no sense unless you can do some very clever peer-to-peer + view distance stuff as network traffic increases exponentially otherwise. Even if they paid for the computing horsepower, connectivity is always the bottleneck. I suppose that you could do other clever things with shuttling people between instances dependent on criteria like location/neighbouring entities/etc but that would be a nightmare to handle without lagging. All of this at a high-tick rate? yeah.. no.

*I received my BSc in Computer Science before the WWW existed (1994!) but ended up going down the corporate IT route so am not really involved in cutting edge stuff. I can still do the maths though!

It remains the Holy Grail for online connectivity in terms of twitch games. There is a reason that companies with vast resources still rely on instanced game sessions - even MMOs.

The Planetside games, which are twitch based and tout the largest number of clients in a session, still lagged - badly - when >32 clients were in the general vicinity. And when they went for the GBWR record for the most clients connected to a session, it was unplayable. The record was about connectivity - not playability.

Eve Online - which isn't twitch based - literally invented a mass of software to host their game. And even so, when an area is heavily populated, they use time-dilated updates to keep everyone in sync.

The only time that "1000 client instances" makes sense is if they somehow - automagically - solve the n+1 connectivity problem. Considering the clown shoes involved in the project, that's highly unlikely. Again, we're in year 6 and they haven't progressed beyond standard networking in the original CryEngine. So there's that.

The thing with cloud servers like AWS & GCE is that you can do all kinds of nifty things. But they were never designed for the demands of twitch based games. That's why very few use them. Heck, even some of my friends working on games for Microsoft with Azure are finding this out. See the upcoming Crackdown game.

Basically, you can't have "1000 client instances". What you can have are "1000 client sessions" via inter-instance communications. This - which is basically rocket science - looks something like this:

i1(n+250) // instance + client count
i2(n+250)
i3(n+250)
i4(n+250)

Those 4 are Amazon EC2 Dedicated Hosts running on Intel Xeon hardware server clusters. Also see the AMI requirement and what an EC2 is. You can also use the free tier to test your app before jumping off a cliff and actually doing it.

This is the part where panic mode sets in. See those instance types, bandwidth caps etc? Yeah.

Without getting technical, with my above example you have a situation whereby they have to create 4 (or more) instances (copies) of the game.

i1 goes live, then gradually fills up with clients. As it fills up - because AWS charges for BOTH in/out bandwidth - the more clients, the higher the costs. It's a lot scarier than that.

i2, i3, i4 all go live - same as above.

Nobody in i1 is going to see or interact with anyone in the other instances. Even if you imagine this as a walled-off garden where i1-client1 is parked on the edge, he will never see i2-client1. They can't see, shoot, or interact with each other. For all intents and purposes they know nothing about each other.

In order to have "1000 client" instances, you need 1000 clients in an instance - which would mean 1000 clients being able to connect and interact with each other in the above. It's IMPOSSIBLE. Period. End of story. And there isn't a single Xeon hardware server on AWS which would somehow automagically spawn an instance configured for 1000 clients in a twitch based game.

If you "stitch" the instances using clever tricks, such that you have 4 instances each with 250 clients, it's no longer "1000 client" instance, but rather a "1000 client" cluster. And in order to give the illusion of 1000 clients in the world, you have to somehow come up with inter- and intra- instance communications such that, using the walled garden example above, all clients within range can somehow see, chat, engage each other.

Well, guess what? Now you're in alchemy territory. You now have a situation whereby i1-client1 fires a missile at i2-client1, and that missile travels through the i1 instance, reaches an area where it is destroyed, and reappears (re-created) in i2 at the location of i2-client1 <---- that fool has probably already buggered off, died, etc. by the time the server code figures out that i1 just fired off a missile at a target in a remote instance which may no longer exist.

It gets better. That missile, along with all the calculations for i1-client1 and i2-client1, needs to be calculated (God help you if you aren't using server-side arbitration - which SC isn't using) on-the-fly and in real-time. All the time. Think of the bandwidth.

Now multiply the horrendous notion above to n+1 for a set of clients.

Then plan to be on vacation when the AWS bill shows up for that month.
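To put the bandwidth worry in back-of-envelope numbers (every figure here is an illustrative assumption, not an actual AWS price or a Star Citizen measurement):

```python
# Back-of-envelope monthly egress for one 250-client instance.
# The per-client rate of 256 kbps is an assumption for illustration only.
def monthly_egress_gb(clients, kbps_per_client, hours_per_day=24, days=30):
    """Outbound traffic in GB for a month of continuous operation."""
    seconds = hours_per_day * 3600 * days
    return clients * kbps_per_client * seconds / 8 / 1e6  # kilobits -> GB

print(round(monthly_egress_gb(clients=250, kbps_per_client=256)))  # 20736 GB
```

Multiply that by four instances and a per-GB egress price, and the number stops being funny.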

Here's the hilarious part. Instead of planning to build this from the start, much like Frontier did, they decided to just wing it. And now, six years later, they're still stuck with the basic CryEngine networking layer.

What is even more hilarious is that - right from the start - Chris claimed (it's in the Kickstarter, interviews, etc.) that he wasn't making an MMO. Then, out of the blue, he was. Suspiciously, that was after it dawned on them that they would make more money by selling the entire Verse as an MMO through the sale of assets. They would never - ever - have been able to raise this much money for a single player or session based game. But the fact is, assuming they deliver both of these games (which imo they won't), the multiplayer is going to remain as it is now: a session based, instanced game which will need a witch doctor to get it to handle more than 16 (let alone 1000) clients in combat.

Further reading, to see how experts thought long and hard about this before designing it, but still ended up with a less-than-stellar solution to a massive problem:

VERY basic guide for ED networking

AWS re:Invent 2015 | (GAM403) From 0 to 60 Million Player Hours in 400B Star Systems

This is why most of us who do this stuff for a living, with decades under our belts, simply can't fathom how they could possibly be making these FALSE statements. Especially when you consider that when this whole thing collapses and the lawsuits start flying, these are the sort of statements that are going to come back to haunt them.

ps: When it comes to Star Citizen, the claim of "1000 player instances" is pure fiction and rubbish.

Narrenbart

  • Jr. Member
  • **
  • Posts: 99
Re: Star Citizen - The Game
« Reply #209 on: March 02, 2017, 04:00:04 PM »
And don't forget what has to be communicated: every explosion, every wing that's been cut off a spaceship (with the exact position, momentum, and the spawn of the new GO), every zone behaviour (because they interact with each other, per the latest dev chat), every physics body, be it a mug in a ship or a bug on a planet, plus the players and the weapons and the bullets that get shot from one "physics grid" to another ... everything in realtime, with ever-changing pseudo-64bit 6DOF positional vectors.

One player in his ship in SC equals 100 players in Planetside, network-data-wise - well, at the moment they can handle .. ermh, 24 players? And only the player (ship) is handled ... no mugs, bugs or all the other fancy stuff ...
It will be a bad day for the cult when they have to realise that all the immersion stuff is not manageable by any network/server structure in this world - let alone cloud servers ... let alone the feature-packed but slow AWS cloud servers ...

 
