Layer 2 vs Layer 3 switches

I am a bit confused about when I need, and should use, a layer 2 or a layer 3 switch. In our corporate network we have Aruba 3810 core switches, and all the access switches are HPE 1950 24/48 PoE+. We have 7 VLANs: IT mgmt, workstations, machines, wlan, wlan-guest, wlan-guest2, and VoIP. Most of the phones have the computers connected through them (built-in switch).

I know about the routing capabilities of layer 3 switches. But I don’t know what happens if I connect a layer 2 switch like the Aruba 2530 PoE+: what changes, and what do I miss? Only the routing between VLANs? If routing has to be enabled for the VLANs/devices that are connected to the 2530, will it be done on the core switches or on the 1950s?
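For context, when inter-VLAN routing stays on the cores, an access switch (1950 or 2530) only needs the VLANs tagged on its uplink; hosts behind it use the core's VLAN interface as their default gateway. A rough sketch of what the core side might look like in ArubaOS-Switch-style syntax (VLAN IDs and addresses are made up; verify the exact commands against your firmware):

```text
! Hypothetical inter-VLAN routing config on a core switch.
ip routing
vlan 10
   name "workstations"
   ip address 10.0.10.1 255.255.255.0
   exit
vlan 70
   name "voip"
   ip address 10.0.70.1 255.255.255.0
   exit
```

With this shape, inter-VLAN traffic from devices behind a pure layer 2 access switch hairpins through the core, which does all the routing.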

Thanks in advance!

How to apply masking layer to sequential CNN model in Keras?

I have a problem applying a masking layer to the CNN part of an RNN/LSTM model.

My data is not the original image; I converted it into a shape of (16, 34, 4) (channels_first). The data is sequential, and the longest sequence is 22 steps, so for a fixed-size input I set the timestep to 22. Sequences shorter than 22 steps are filled up with np.zeros. However, the zero padding makes up about half of the whole dataset, so training cannot reach a good result with so much useless data. I therefore want to add a mask to cancel out the zero-padded steps.
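For reference, the zero-padding scheme described above can be sketched like this (a minimal illustration; `pad_sequence` and the constants are hypothetical names, not from the original code):

```python
import numpy as np

MAX_STEPS = 22           # longest sequence length
STEP_SHAPE = (16, 34, 4) # per-step data shape, channels_first

def pad_sequence(steps):
    """Stack a variable-length sequence into (22, 16, 34, 4), zero-padded."""
    padded = np.zeros((MAX_STEPS,) + STEP_SHAPE, dtype=np.float32)
    padded[:len(steps)] = np.stack(steps)
    # Boolean mask: True for real steps, False for padding.
    mask = np.arange(MAX_STEPS) < len(steps)
    return padded, mask

# Example: a sequence with only 5 real steps.
sample = [np.ones(STEP_SHAPE) for _ in range(5)]
x, m = pad_sequence(sample)
```

The boolean mask marks which timesteps carry real data and which are padding.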

Here is my code.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import (TimeDistributed, Masking, Conv2D, BatchNormalization,
                              Dropout, Flatten, GRU, Dense)

    mask = np.zeros((16, 34, 4), dtype=np.int8)
    input_shape = (22, 16, 34, 4)

    model = Sequential()
    model.add(TimeDistributed(Masking(mask_value=mask), input_shape=input_shape))
    model.add(TimeDistributed(Conv2D(100, (5, 2), data_format='channels_first', activation='relu'), name='conv1'))
    model.add(TimeDistributed(BatchNormalization(), name='bn1'))
    model.add(Dropout(0.5, name='drop1'))
    model.add(TimeDistributed(Conv2D(100, (5, 2), data_format='channels_first', activation='relu'), name='conv2'))
    model.add(TimeDistributed(BatchNormalization(), name='bn2'))
    model.add(Dropout(0.5, name='drop2'))
    model.add(TimeDistributed(Conv2D(100, (5, 2), data_format='channels_first', activation='relu'), name='conv3'))
    model.add(TimeDistributed(BatchNormalization(), name='bn3'))
    model.add(Dropout(0.5, name='drop3'))
    model.add(TimeDistributed(Flatten(), name='flatten'))
    model.add(GRU(256, activation='tanh', return_sequences=True, name='gru'))
    model.add(Dropout(0.4, name='drop_gru'))
    model.add(Dense(35, activation='softmax', name='softmax'))
    model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['acc'])

Here’s the model structure.

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    time_distributed_4 (TimeDist (None, 22, 16, 34, 4)     0
    _________________________________________________________________
    conv1 (TimeDistributed)      (None, 22, 100, 30, 3)    16100
    _________________________________________________________________
    bn1 (TimeDistributed)        (None, 22, 100, 30, 3)    12
    _________________________________________________________________
    drop1 (Dropout)              (None, 22, 100, 30, 3)    0
    _________________________________________________________________
    conv2 (TimeDistributed)      (None, 22, 100, 26, 2)    100100
    _________________________________________________________________
    bn2 (TimeDistributed)        (None, 22, 100, 26, 2)    8
    _________________________________________________________________
    drop2 (Dropout)              (None, 22, 100, 26, 2)    0
    _________________________________________________________________
    conv3 (TimeDistributed)      (None, 22, 100, 22, 1)    100100
    _________________________________________________________________
    bn3 (TimeDistributed)        (None, 22, 100, 22, 1)    4
    _________________________________________________________________
    drop3 (Dropout)              (None, 22, 100, 22, 1)    0
    _________________________________________________________________
    flatten (TimeDistributed)    (None, 22, 2200)          0
    _________________________________________________________________
    gru (GRU)                    (None, 22, 256)           1886976
    _________________________________________________________________
    drop_gru (Dropout)           (None, 22, 256)           0
    _________________________________________________________________
    softmax (Dense)              (None, 22, 35)            8995
    =================================================================
    Total params: 2,112,295
    Trainable params: 2,112,283
    Non-trainable params: 12
    _________________________________________________________________

For mask_value, I tried either 0 or the mask array above, but neither works: training still runs over all the data, half of which is zero padding.
Can anyone help me?

Problem with business and data access layer design

I am creating a library to interact with a third-party API. This library will be a wrapper around that third-party library, and I want to expose my wrapper methods to clients (Web API, WinForms, console, MVC, etc.).

Below are the methods that I want to expose to my clients so they can perform operations using the third-party API, but I don't want to give them direct access. They will always use my wrapper API to perform operations.

    public interface IMyLibraryWrapperApi
    {
        // methods exposed to clients
        int AddRegion(RegionRequest request);
    }

    public class MyLibraryWrapperApi : IMyLibraryWrapperApi
    {
        private readonly IThirdPartyUnitOfWork _thirdPartyUnitOfWork;
        private readonly IRegionService _regionService;

        public MyLibraryWrapperApi(string domain, string username, string password)
        {
            this._thirdPartyUnitOfWork = new ThirdPartyUnitOfWork(domain, username, password);
            this._regionService = new RegionService(_thirdPartyUnitOfWork);
        }

        public int AddRegion(RegionRequest request)
        {
            return _regionService.CreateRegion(request);
        }
    }

Service/Business Layer :

    public class RegionService : IRegionService
    {
        private readonly IThirdPartyUnitOfWork _thirdPartyUnitOfWork;

        public RegionService(IThirdPartyUnitOfWork thirdPartyUnitOfWork)
        {
            this._thirdPartyUnitOfWork = thirdPartyUnitOfWork;
        }

        public int CreateRegion(RegionRequest request)
        {
            return _thirdPartyUnitOfWork.Insert(request.Name, request.Direction);
        }
    }

DataAccess Layer :

    public interface IThirdPartyUnitOfWork
    {
        int Insert(string name, string direction);
        T GetById<T>(int id);
    }

    public class ThirdPartyUnitOfWork : IThirdPartyUnitOfWork, IDisposable
    {
        private readonly ServiceContext _serviceContext;

        public ThirdPartyUnitOfWork(string domain, string username, string password)
        {
            _serviceContext = new ServiceContext(domain);
            _serviceContext.Credentials = new ThirdPartyCredentials(username, password);
        }

        // Insert method implementation
        // GetById method implementation
    }

Now I want IMyLibraryWrapperApi to always interact with the service layer, and the service layer to interact with the data access layer. But as you can see, IThirdPartyUnitOfWork is exposed alongside IMyLibraryWrapperApi, and any client can call IThirdPartyUnitOfWork directly, which I don't want.

With the current design I can't see how to structure the layers properly so that they do not leak into the other layers.

Can anybody please help me with some suggestions to improve this design?
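As an aside, one common way to avoid leaking the data-access type is to make the wrapper the only public entry point and let it construct the lower layers itself (in C# this would typically mean marking ThirdPartyUnitOfWork and RegionService internal). A minimal sketch of that shape, written in Python for brevity with hypothetical names:

```python
class _ThirdPartyUnitOfWork:
    """Data access layer; the leading underscore marks it non-public
    (the Python convention roughly analogous to C#'s 'internal')."""
    def __init__(self, domain, username, password):
        self._credentials = (domain, username, password)

    def insert(self, name, direction):
        # Pretend call into the third-party service; returns a new id.
        return 42

class _RegionService:
    """Business layer; only ever constructed by the wrapper below."""
    def __init__(self, unit_of_work):
        self._uow = unit_of_work

    def create_region(self, name, direction):
        return self._uow.insert(name, direction)

class MyLibraryWrapperApi:
    """The only public type; clients never see the layers below."""
    def __init__(self, domain, username, password):
        self._region_service = _RegionService(
            _ThirdPartyUnitOfWork(domain, username, password))

    def add_region(self, name, direction):
        return self._region_service.create_region(name, direction)

api = MyLibraryWrapperApi("example.local", "user", "secret")
region_id = api.add_region("North", "N")
```

Because the wrapper neither accepts nor returns the unit-of-work type, clients cannot depend on it even accidentally.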

Unable to crawl content source due to “The secure sockets layer (SSL) certificate sent by the server was invalid and this item will not be crawled”

We have SharePoint 2013 SP1, and an external resource in the corporate subnet is set as a content source in the SharePoint farm's Search service application.

Ignore SSL warnings is set to Yes in Search farm administration.

When trying to crawl the content source, the following error occurs:

The secure sockets layer (SSL) certificate sent by the server was invalid and this item will not be crawled. (0x80041223)

No further ULS logs are shown, even when the logging level is set to Verbose.

My only idea is that the certificate is not recognized by SharePoint because it is a wildcard certificate (see screenshot).

Wildcard certificate


This is how Chrome displays the certificate, including the SAN (see screenshot).

Correct approach to pass data to service layer

I’m curious what’s considered the correct (best) way to pass data to a service layer in Core. Say I have a Person entity that has a relation to an Image (profile picture) entity and another relation to an Address entity.

Should the signature of the service method look like this:

public Person CreateCustomer(Person person, Address address, Image profilePicture)

Or would it make more sense to put all the needed data in an interface:

public Person CreateCustomer(ICustomerDetails details)

Using the last approach, I would be able to make my ViewModel inherit from ICustomerDetails, and I could pass the ViewModel from the controller directly to the service layer.
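For illustration, the parameter-object variant can be sketched like this (Python used for brevity; every name here is hypothetical, not from the post):

```python
from dataclasses import dataclass
from typing import Protocol

class CustomerDetails(Protocol):
    """The narrow contract the service depends on (the ICustomerDetails role)."""
    name: str
    street: str
    picture_url: str

@dataclass
class CustomerViewModel:
    """The view model satisfies the CustomerDetails contract, so the
    controller can hand it straight to the service."""
    name: str
    street: str
    picture_url: str

class CustomerService:
    def create_customer(self, details: CustomerDetails) -> str:
        # The service sees only the contract, not the concrete view model.
        return f"created {details.name}"

svc = CustomerService()
result = svc.create_customer(CustomerViewModel("Ada", "Main St 1", "ada.png"))
```

The trade-off is that the service layer now depends on a shared contract type, but no longer on three separate entity parameters.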

Which approach would be most advisable in this situation?

Nested v-for loop on firestore data reactively duplicates deepest layer on changes

I have nested data in Firestore, and I want to represent it as a nested list. However, when I change a value in the Firestore database, the second level of values is not updated correctly.

On load it looks correct (see screenshot).

But when I change “Temperature1” to “Temperature”, it updates like this (see screenshot).

If I reload the page, it all looks correct again (see screenshot).

If I change the top level name, e.g. Cactus, it behaves as expected (the name gets updated reactively).

How can I get the second-level strings to simply update reactively as well, without duplication?

This is my data structure in Firestore (the ids are auto-generated):

    -users (collection)
     |-id0 (doc)
      |-name: "Niels"
      |-tiles (collection)
       |-id1 (doc)
       | |-name: "Monstera Deliciosa"
       | |-services (collection)
       |   |-id2 (doc)
       |   | |-name: "Temperature"
       |   |-id3 (doc)
       |     |-name: "Relative-humidity"
       |-id4 (doc)
         |-name: "Cactus"
         |-services (collection)
           |-id2 (doc)
             |-name: "Soil moisture"

Here are excerpts of my code:

My Vue component (which is the main view): Dashboard.vue

    <template>
        <div id="dashboard">
            <ul v-for="tile in tiles">
                <li>{{ tile.name }}</li>
                <ul v-for="service in tile.services">
                    <li>{{ service.name }}</li>
                </ul>
            </ul>
        </div>
    </template>

    <script>
        import { mapState } from 'vuex'
        const fb = require('../firebaseConfig.js')

        export default {
            computed: {
                ...mapState(['tiles'])
            }
        }
    </script>

And my store.js file:

    import Vue from 'vue'
    import Vuex from 'vuex'
    const fb = require('./firebaseConfig.js')

    Vue.use(Vuex)

    // handle page reload
    fb.auth.onAuthStateChanged(user => {
        if (user) {
            // realtime updates from tiles
            fb.usersCollection.doc(user.uid).collection('tiles').orderBy('name', 'asc').onSnapshot(tilesSnapshot => {
                let tilesArray = []

                tilesSnapshot.forEach(tileDoc => {
                    let tile = tileDoc.data()
                    tile.id = tileDoc.id
                    // console.log("Tile name:", tile.name)

                    let servicesArray = []
                    tileDoc.ref.collection('services').onSnapshot(servicesSnapshot => {
                        servicesSnapshot.forEach(serviceDoc => {
                            let service = serviceDoc.data()
                            service.id = serviceDoc.id
                            servicesArray.push(service)
                        })
                    })
                    tile.services = servicesArray
                    tilesArray.push(tile)
                })
                store.commit('setTiles', tilesArray)
            })
        }
    })

    export const store = new Vuex.Store({
        state: {
            tiles: []
        },
        mutations: {
            setTiles(state, val) {
                if (val) {
                    state.tiles = val
                } else {
                    state.tiles = []
                }
            }
        }
    })

Link Pyramids For SEO- Best 3 Layer Blog Theme Backlinks Package for $19

Blog networking is the best SEO link-building procedure, and a 3-layer link pyramid of Web 2.0 blogs will finish your SEO project. I'm introducing here one of the most powerful packages at the lowest price in the Link Pyramids category. The Blog Theme link pyramid consists of 3 tiers of Web 2.0 blog backlinks powered by 2 layers of social networks and article directories.

More about this service:

Tier 1: 50 Web 2.0 blogs PR2-8 plus 20 social network posts, all niche-relevant.
Tier 2: 50 Web 2.0 profiles plus 50 Web 2.0 blogs.
Tier 3: 500 article directory posts PR2-8 plus 50 social network posts + 30 social bookmarks.

What keyword difficulty is this service suitable for: it will rank all keywords with difficulty up to 35, based on the Keyword Finder ranking algorithm.

by: rayanmehr
Created: —
Category: Link Pyramids
Viewed: 237

Utils, Service class and Persistence Layer

I have a method in an API handler which does API validation, performs business logic, and then makes a call to the DB. Is it a good idea to move the business logic to a utility class or a service class?

IMO the business logic should be moved to a service, as utility classes are supposed to have methods which can be shared across applications. If I move the business logic to a utility class, it will create a dependency between the accessor and the utility, which will also make it difficult to unit test.

Please let me know if my reasoning above is correct.
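To illustrate the testability argument, here is a minimal sketch (hypothetical names, Python for brevity) of business logic living in a service class with an injected accessor, which a unit test can replace with a fake:

```python
class OrderService:
    """Business logic lives here, not in a shared utility class."""
    def __init__(self, accessor):
        self._accessor = accessor  # injected, easy to swap in unit tests

    def place_order(self, item, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        return self._accessor.save({"item": item, "qty": qty})

class FakeAccessor:
    """Stand-in for the persistence layer in a unit test."""
    def __init__(self):
        self.saved = []

    def save(self, record):
        self.saved.append(record)
        return len(self.saved)  # pretend row id

svc = OrderService(FakeAccessor())
order_id = svc.place_order("book", 2)
```

A static utility holding the same logic would have to reach for the real accessor (or a global), which is exactly the coupling that makes unit testing hard.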