How should I store a large matrix (2D array) in MongoDB?

Sorry if this is a stupid question, but I am having trouble dealing with my datasets. I build deep learning models on my data, and my boss wants me to store both the dataset and the model results in MongoDB. (By the way, I don't have a clustering environment.)

The input dataset has row names (unique within a dataset, but not necessarily unique across datasets), column names, and an (m, n) sparse counter matrix whose size can exceed 2 GB even without counting the zeros. n can vary, and the column names can differ between datasets (the same column may be expressed differently, e.g. a12 vs. a12-tag, which would need parsing to use in an RDBMS), so I am trying MongoDB.
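
To make the data concrete, here is roughly what one dataset looks like on the Python side (toy names and sizes, all made up for illustration; I keep the counter matrix as a scipy sparse matrix):

    import numpy as np
    from scipy import sparse

    # Toy version of one dataset; the real matrix is (m, n) and over 2 GB.
    row_names = ["sample_001", "sample_002", "sample_003"]   # unique within this dataset
    col_names = ["a12", "a12-tag", "b7"]                     # names can differ across datasets
    m, n = len(row_names), len(col_names)

    # Sparse counter matrix: mostly zeros, integer counts elsewhere.
    rows = np.array([0, 0, 2])
    cols = np.array([1, 2, 0])
    vals = np.array([5, 1, 42])
    matrix = sparse.coo_matrix((vals, (rows, cols)), shape=(m, n))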

I am fairly new to RDBMSs and completely new to NoSQL. Since I knew nothing, I first tried to make a single document like this:
{
    datasetName: 'name',
    rowName: [rowNames],
    columnName: [columnNames],
    rowNum: m,
    colNum: n,
    matrix: [...]        # the (m, n) counter matrix stored as a nested list
}
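
The insert itself, with pymongo, looked roughly like this (database and collection names are just placeholders, and matrix, row_names, col_names come from the snippet above):

    from pymongo import MongoClient

    client = MongoClient()                  # local MongoDB, no cluster
    col = client["mydb"]["datasets"]        # hypothetical database/collection names

    doc = {
        "datasetName": "name",
        "rowName": row_names,
        "columnName": col_names,
        "rowNum": m,
        "colNum": n,
        "matrix": matrix.toarray().tolist(),   # dense (m, n) matrix as nested lists
    }
    col.insert_one(doc)   # with the real data this raises DocumentTooLarge (16 MB BSON limit)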

Of course, I got an error about the BSON document size limit (16 MB). So I tried to split the data into two collections linked by a reference.
Collection 1 holds the metadata of a dataset:
{
    datasetName: 'name',
    rowName: [rowNames],
    columnName: [columnNames],
    rowNum: m,
    colNum: n
}
Collection 2 holds the sparse vector of each row:
{
    rowName: rowName,              # only one row per document
    vector: [Matrix[rowName]],     # sparse form of that row's vector
    reference: idOfInputData       # reference to the metadata document
}

Then I got crushed by my boss: he said I wasn't exploiting the benefits of NoSQL and that I need to build a new data model. Apparently, he wants all the data saved at once, without serializing it or converting it to binary.

As for use cases: I rarely need to update individual values, since the data is saved once and then only loaded.

I have been searching for examples, but I couldn't find a useful one yet. How should I deal with a large matrix in MongoDB? Should I use GridFS for this? If so, what are the benefits over a plain file system?
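
For reference, the GridFS version I am considering would look roughly like this (continuing with matrix, row_names and col_names from the first snippet; note that it serializes the matrix to binary, which is exactly what my boss wants to avoid):

    import io
    import gridfs
    from scipy import sparse

    db = client["mydb"]            # hypothetical database name
    fs = gridfs.GridFS(db)

    # Serialize the sparse matrix to a binary .npz blob and store it in GridFS.
    buf = io.BytesIO()
    sparse.save_npz(buf, matrix.tocsr())
    file_id = fs.put(buf.getvalue(), filename="name.npz",
                     metadata={"rowName": row_names, "columnName": col_names})

    # Loading it back:
    blob = fs.get(file_id).read()
    restored = sparse.load_npz(io.BytesIO(blob))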